Search results for: multi stage flash distillation
1056 A Seven Year Single-Centre Study of Dental Implant Survival in Head and Neck Oncology Patients
Authors: Sidra Suleman, Maliha Suleman, Stephen Brindley
Abstract:
Oral rehabilitation of head and neck cancer patients plays a crucial role in their post-treatment quality of life. Placement of dental implants or implant-retained prostheses can help restore oral function and aesthetics, which are often compromised following surgery. Conventional prosthodontic techniques can be insufficient for rehabilitating such patients due to their altered anatomy and reduced oral competence; hence, there is a strong clinical need for the placement of dental implants. With an increasing incidence of head and neck cancer, the demand for such treatment is rising. Aim: The aim of the study was to determine the survival rate of dental implants in head and neck cancer patients treated at the Restorative and Maxillofacial Department, Royal Stoke University Hospital (RSUH), United Kingdom. Methodology: All patients who received dental implants between January 1, 2013 and December 31, 2020 were identified. Patients were excluded based on three criteria: 1) non-head and neck cancer patients, 2) no outpatient follow-up post-implant placement, and 3) provision of non-dental implants. Scanned paper notes and electronic records were extracted and analyzed. Implant survival was defined as fixtures that had remained in situ and not required removal. Sample: Overall, 61 individuals were recruited from the 143 patients identified. The mean age was 64.9 years, with a range of 35 to 89 years. The sample included 37 (60.7%) males and 24 (39.3%) females. In total, 211 implants were placed, of which 40 (19.0%) were in the maxilla, 152 (72.0%) in the mandible, and 19 (9.0%) in autogenous bone graft sites. Histologically, 57 (93.4%) patients had squamous cell carcinoma, with 43 (70.5%) patients having either stage IVA or IVB disease. As part of treatment, 42 (68.9%) patients received radiotherapy, which was carried out post-operatively in 29 (69.0%) cases, and 21 (34.4%) patients underwent chemotherapy, 13 (61.9%) of whom received it post-operatively.
The median follow-up period was 21.9 months, with a range of 0.9 to 91.4 months. During the study, 23 (37.7%) patients died, and their data were censored beyond the date of death. Results: In total, four patients who had received radiotherapy had one implant failure each. Two mandibular implants failed secondary to osteoradionecrosis, and two maxillary implants did not survive as a result of failure to osseointegrate. The overall implant survival rates were 99.1% at three years and 98.1% at both 5 and 7 years. Conclusions: Although these data show that implant failure rates are low, they highlight the difficulty in predicting which patients will be affected. Future studies involving larger cohorts are warranted to further analyze factors affecting outcomes.
Keywords: oncology, dental implants, survival, restorative
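The survival rates reported above come from time-to-event data in which deaths are censored rather than counted as failures. As a hedged illustration of how such rates are typically computed (the study's own analysis method is not named in the abstract), a minimal Kaplan-Meier estimator on made-up implant data might look like:

```python
# Minimal Kaplan-Meier estimator sketch. Illustrative data only, not the
# study's. Each record: (time in months, event), where event=1 means
# implant failure and event=0 means censoring (e.g., death, end of follow-up).
def kaplan_meier(records):
    """Return [(time, survival probability)] at each failure time."""
    records = sorted(records)
    n_at_risk = len(records)
    surv = 1.0
    curve = []
    i = 0
    while i < len(records):
        t = records[i][0]
        deaths = sum(1 for r in records if r[0] == t and r[1] == 1)
        removed = sum(1 for r in records if r[0] == t)
        if deaths > 0:
            surv *= (n_at_risk - deaths) / n_at_risk
            curve.append((t, surv))
        n_at_risk -= removed
        i += removed
    return curve

# Hypothetical cohort: 10 implants, one failure at 12 months, rest censored.
data = [(12, 1)] + [(24, 0)] * 9
print(kaplan_meier(data))  # [(12, 0.9)]
```

Because censored implants still count in the at-risk denominator up to their censoring time, the estimate uses all follow-up information without treating deaths as implant failures.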
Procedia PDF Downloads 238
1055 Application of Nuclear Magnetic Resonance (1H-NMR) in the Analysis of Catalytic Aquathermolysis: Colombian Heavy Oil Case
Authors: Paola Leon, Hugo Garcia, Adan Leon, Samuel Munoz
Abstract:
Enhanced oil recovery by steam injection was long considered a process that only generated physical recovery mechanisms. However, there is evidence of a series of chemical reactions, called aquathermolysis, which generate hydrogen sulfide, carbon dioxide, methane, and lower molecular weight hydrocarbons. These reactions can be favored by the addition of a catalyst during steam injection; in this way, it is possible to upgrade the original oil in situ through the increased production of molecules of lower molecular weight. This additional effect could increase the oil recovery factor and reduce costs in the transport and refining stages. Therefore, this research focused on the experimental evaluation of catalytic aquathermolysis on a Colombian heavy oil of 12.8°API. The effects of three different catalysts, reaction time, and temperature were evaluated in a batch microreactor. The changes in the Colombian heavy oil were quantified through proton nuclear magnetic resonance (1H-NMR). Interpretation of the relaxation times and absorption intensities allowed identification of the distribution of functional groups in the base oil and upgraded oils. Additionally, the average number of aliphatic carbons in alkyl chains, the number of substituted rings, and the aromaticity factor were established as average structural parameters in order to simplify the compositional analysis of the samples. The first experimental stage proved that each catalyst develops a different reaction mechanism. The aromaticity factor follows an increasing order for the salts used: Mo > Fe > Ni. However, the upgraded oil obtained with iron naphthenate tends to form a higher content of mono-aromatic and a lower content of poly-aromatic compounds. On the other hand, the results obtained from the second phase of experiments suggest that the upgraded oils have a smaller difference in the length of alkyl chains in the range of 240 to 270 °C.
This parameter has lower values at 300 °C, which indicates that alkylation or cleavage reactions of alkyl chains govern at higher reaction temperatures. The presence of condensation reactions is supported by the behavior of the aromaticity factor and the production of bridge carbons between aromatic rings (RCH₂). Finally, a greater dispersion is observed in the aliphatic hydrogens, which indicates that the alkyl chains have a greater reactivity compared to the aromatic structures.
Keywords: catalyst, upgrading, aquathermolysis, steam
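The aromaticity factor discussed above is commonly estimated from the relative 1H-NMR integrals of aromatic versus aliphatic protons. The abstract does not give the authors' exact formula (corrections such as Brown-Ladner are omitted here), so the following is only a simplified sketch of the idea:

```python
# Illustrative estimate of an aromaticity-type factor from 1H-NMR integrals.
# Assumption: the factor is approximated as the fraction of aromatic
# hydrogen; the paper's actual structural-parameter formulas are not
# reproduced in the abstract.
def aromaticity_factor(integral_aromatic, integral_aliphatic):
    total = integral_aromatic + integral_aliphatic
    return integral_aromatic / total

# Hypothetical integrals for a base oil vs. an upgraded oil:
base = aromaticity_factor(8.0, 92.0)
upgraded = aromaticity_factor(11.0, 89.0)
print(round(base, 3), round(upgraded, 3))  # 0.08 0.11
```

A rise in this ratio after reaction, as for the Mo catalyst described above, signals condensation toward more aromatic structures.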
Procedia PDF Downloads 113
1054 Investigation of Two Polymorphisms of the hTERT Gene (rs2736098 and rs2736100) and the miR-146a rs2910164 Polymorphism in Cervical Cancer
Authors: Hossein Rassi, Alaheh Gholami Roud-Majany, Zahra Razavi, Massoud Hoshmand
Abstract:
Cervical cancer is a multistep disease that is thought to result from an interaction between genetic background and environmental factors. Human papillomavirus (HPV) infection is the leading risk factor for cervical intraepithelial neoplasia (CIN) and cervical cancer. On the other hand, some hTERT and miRNA polymorphisms may play an important role in carcinogenesis. This study attempts to clarify the relation of hTERT genotypes and miR-146a genotypes to cervical cancer. Forty-two archival samples with cervical lesions were retrieved from Khatam Hospital, and 40 samples from healthy persons were used as the control group. A simple and rapid method was used to detect the simultaneous amplification of the HPV consensus L1 region and HPV-16, -18, -11, -31, -33, and -35, along with the b-globin gene as an internal control. Multiplex PCR was used for detection of hTERT and miR-146a rs2910164 genotypes in our lab. Finally, data analysis was performed using version 7 of the Epi Info(TM) 2012 software and the chi-square (χ²) test for trend. Cervix lesions were collected from 42 patients with squamous metaplasia, cervical intraepithelial neoplasia, and cervical carcinoma. Successful DNA extraction was assessed by PCR amplification of the b-actin gene (99 bp). According to the results, the hTERT (rs2736098) GG genotype and the miR-146a rs2910164 CC genotype were significantly associated with an increased risk of cervical cancer in the study population. In this study, we detected HPV 18 in 13 of the 42 cervical cancers. A connection between several SNP polymorphisms and human papillomavirus has been seen in only a few studies; differences among their findings may result from differences in race, geographic situation, and lifestyle in each region. The present study provided preliminary evidence that the hTERT rs2736098 GG genotype and the miR-146a rs2910164 CC genotype may affect cervical cancer risk in the study population, interacting synergistically with the HPV 18 genotype.
Our results demonstrate that testing of hTERT rs2736098 genotypes and miR-146a rs2910164 genotypes in combination with HPV18 can serve as a major risk marker in the early identification of cervical cancers. Furthermore, the results indicate the possibility of primary prevention of cervical cancer by vaccination against HPV18 in Iran.
Keywords: polymorphism of hTERT gene, miR-146a rs2910164 polymorphism, cervical cancer, virus
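The chi-square test for trend mentioned in the methodology assesses whether case proportions change monotonically across ordered genotype groups (e.g., GG/GC/CC). A minimal Cochran-Armitage-style sketch, with made-up counts since the study's actual genotype tables are not given in the abstract:

```python
# Sketch of a 1-df chi-square test for trend across ordered genotype
# categories. Counts below are hypothetical, not the study's data.
def trend_chi_square(cases, controls, scores=(0, 1, 2)):
    """Cochran-Armitage trend statistic for ordered groups."""
    n_cases = sum(cases)
    n_total = sum(cases) + sum(controls)
    totals = [c + k for c, k in zip(cases, controls)]
    p = n_cases / n_total
    # numerator: score-weighted excess of observed vs. expected cases
    num = sum(s * (c - p * t) for s, c, t in zip(scores, cases, totals))
    mean_s = sum(s * t for s, t in zip(scores, totals)) / n_total
    var = p * (1 - p) * sum(t * (s - mean_s) ** 2 for s, t in zip(scores, totals))
    return num ** 2 / var

# Hypothetical counts for genotypes GG, GC, CC in cases vs. controls:
x2 = trend_chi_square(cases=[10, 18, 14], controls=[20, 14, 6])
print(round(x2, 2))
```

A statistic above about 3.84 corresponds to p < 0.05 on one degree of freedom, which is the kind of threshold behind the "significantly associated" claim above.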
Procedia PDF Downloads 327
1053 Response of Caldeira de Tróia Saltmarsh to Sea Level Rise, Sado Estuary, Portugal
Authors: A. G. Cunha, M. Inácio, M. C. Freitas, C. Antunes, T. Silva, C. Andrade, V. Lopes
Abstract:
Saltmarshes are essential ecosystems from both an ecological and a biological point of view. Furthermore, they constitute an important social niche, providing valuable economic and protection functions. Thus, understanding their rates and patterns of sedimentation is critical for functional management and rehabilitation, especially in a sea level rise (SLR) scenario. The Sado estuary is located 40 km south of Lisbon. It is a bar-built estuary, separated from the sea by a large sand spit: the Tróia barrier. Caldeira de Tróia is located on the free edge of this barrier and encompasses a salt marsh of ca. 21,000 m². Sediment cores were collected in the high and low marshes and in the mudflat area of the north bank of Caldeira de Tróia. From the low marsh core, fifteen samples were chosen for ²¹⁰Pb and ¹³⁷Cs determination at the University of Geneva. The cores from the high marsh and the mudflat are still being analyzed. A sedimentation rate of 2.96 mm/year was derived from ²¹⁰Pb using the Constant Flux Constant Sedimentation model. The ¹³⁷Cs profile shows a peak in activity (1963) between 15.50 and 18.50 cm, giving a 3.1 mm/year sedimentation rate for the past 53 years. The adopted sea level rise scenario was based on a model built with an initial SLR rate of 2.1 mm/year in 2000 and an acceleration of 0.08 mm/year². Based on the harmonic analysis of 2005 data from the Setúbal-Tróia tide gauge, a tide model was estimated and used to build tidal tables for the period 2000-2016. With these tables, the average mean water levels were determined for the same time span. A digital terrain model (DTM) was created from LIDAR scanning with 2 m horizontal resolution (APA-DGT, 2011) and validated with altimetric data obtained with a DGPS-RTK. The response model calculates a new elevation for each pixel of the DTM for 2050 and 2100 based on the sedimentation rates specific to each environment.
At this stage, theoretical values were chosen for the high marsh and the mudflat (respectively, equal to and double the low marsh rate – 2.92 mm/year). These values will be rectified once sedimentation rates are determined for the other environments. For both projections, the total surface of the marsh decreases: by 2% in 2050 and 61% in 2100. Additionally, the high marsh coverage diminishes significantly, indicating a regression in terms of maturity.
Keywords: ¹³⁷Cs, ²¹⁰Pb, saltmarsh, sea level rise, response model
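Two of the numbers above can be reproduced with simple arithmetic. The ¹³⁷Cs rate follows from the 1963 peak depth divided by elapsed time (the exact depth within the 15.50-18.50 cm interval used by the authors is not stated; ~16.5 cm reproduces 3.1 mm/year), and the SLR scenario is a constant-acceleration model. A sketch, with the model form assumed from the stated initial rate and acceleration:

```python
# Back-of-envelope reproduction of two figures from the abstract.
# Assumptions: 1963 Cs-137 peak depth taken as ~16.5 cm (only the
# 15.50-18.50 cm interval is given), and cumulative sea-level rise modeled
# as rise(t) = r0*(t - 2000) + 0.5*a*(t - 2000)^2, with r0 = 2.1 mm/yr
# and a = 0.08 mm/yr^2 as stated.

def sedimentation_rate_mm_per_year(peak_depth_cm, years_since_1963):
    return peak_depth_cm * 10.0 / years_since_1963

def cumulative_slr_mm(year, r0=2.1, accel=0.08, t0=2000):
    dt = year - t0
    return r0 * dt + 0.5 * accel * dt ** 2

print(round(sedimentation_rate_mm_per_year(16.5, 53), 1))  # 3.1 mm/yr
print(round(cumulative_slr_mm(2050), 1))  # 205.0 mm by 2050 under this model
print(round(cumulative_slr_mm(2100), 1))  # 610.0 mm by 2100
```

Comparing the projected rise (~6.1 mm/yr average by 2100) against marsh accretion of ~3 mm/yr makes the predicted 61% loss of marsh surface by 2100 plausible.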
Procedia PDF Downloads 251
1052 Long-Term Resilience Performance Assessment of Dual and Singular Water Distribution Infrastructures Using a Complex Systems Approach
Authors: Kambiz Rasoulkhani, Jeanne Cole, Sybil Sharvelle, Ali Mostafavi
Abstract:
Dual water distribution systems have been proposed as solutions to enhance the sustainability and resilience of urban water systems by improving performance and decreasing energy consumption. The objective of this study was to evaluate the long-term resilience and robustness of dual versus singular water distribution systems under various stressors, such as demand fluctuation, aging infrastructure, and funding constraints. To this end, the long-term dynamics of these infrastructure systems were captured using a simulation model that integrates institutional agency decision-making processes with physical infrastructure degradation to evaluate the long-term transformation of water infrastructure. A set of model parameters that vary for dual and singular distribution infrastructure based on system attributes, such as pipe length and material, energy intensity, water demand, water price, average pressure and flow rate, as well as operational expenditures, was considered and input into the simulation model. Accordingly, the model was used to simulate various scenarios of demand changes, funding levels, water price growth, and renewal strategies. The long-term resilience and robustness of each distribution infrastructure were evaluated based on various performance measures, including network average condition, break frequency, network leakage, and energy use. An ecologically-based resilience approach was used to examine regime shifts and tipping points in the long-term performance of the systems under different stressors. Also, Classification and Regression Tree (CART) analysis was adopted to assess the robustness of each system under various scenarios. Using data from the City of Fort Collins, the long-term resilience and robustness of the dual and singular water distribution systems were evaluated over a 100-year analysis horizon for various scenarios.
The results of the analysis enabled: (i) comparison between dual and singular water distribution systems in terms of long-term performance, resilience, and robustness; and (ii) identification of renewal strategies and decision factors that enhance the long-term resiliency and robustness of dual and singular water distribution systems under different stressors.
Keywords: complex systems, dual water distribution systems, long-term resilience performance, multi-agent modeling, sustainable and resilient water systems
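The core simulation idea described above, physical degradation coupled with budget-constrained agency renewal decisions over a long horizon, can be sketched in a few lines. All parameter values below are illustrative placeholders, not the City of Fort Collins data or the authors' model:

```python
# Minimal sketch of coupling pipe degradation with an agency's renewal
# decisions over a 100-year horizon. Parameters are purely illustrative.
import random

def simulate(years=100, n_pipes=50, decay=0.01, budget_per_year=2, seed=1):
    random.seed(seed)
    condition = [1.0] * n_pipes           # 1.0 = new, 0.0 = failed
    avg_history = []
    for _ in range(years):
        # physical degradation with small random variation per pipe
        condition = [max(0.0, c - decay * random.uniform(0.5, 1.5))
                     for c in condition]
        # agency decision: renew the worst pipes up to the annual budget
        worst = sorted(range(n_pipes), key=lambda i: condition[i])[:budget_per_year]
        for i in worst:
            condition[i] = 1.0
        avg_history.append(sum(condition) / n_pipes)
    return avg_history

history = simulate()
print(round(history[-1], 2))  # long-run network average condition
```

Varying the budget or decay rate in such a toy model shows the regime-shift behavior the ecologically-based resilience analysis looks for: below a critical funding level, average condition drifts steadily downward instead of stabilizing.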
Procedia PDF Downloads 295
1051 A Multidimensional Indicator-Based Framework to Assess the Sustainability of Productive Green Roofs: A Case Study in Madrid
Authors: Francesca Maria Melucci, Marco Panettieri, Rocco Roma
Abstract:
Cities are at the forefront of achieving the Sustainable Development Goals set out in Agenda 2030. Accordingly, increasing attention has been given to the creation of resilient, sustainable, inclusive, and green cities, and finding solutions to urban environmental, social, and economic problems is one of the greatest challenges faced by researchers today. In particular, urban green infrastructure, including green roofs, plays a key role in tackling these problems. The starting point was an extensive literature review on 1. research developments on the benefits (environmental, economic, and social) and implications of green roofs; 2. sustainability assessment and applied methodologies; 3. specific indicators to measure impacts on urban sustainability. Through this review, the appropriate qualitative and quantitative characteristics that are part of the complex 'green roof' system were identified, as studies that holistically capture its multifunctional nature are still lacking. This paper therefore aims to find a method to improve community participation in green roof initiatives and to support local governance processes in developing efficient proposals to achieve better sustainability and resilience of cities. To this aim, the multidimensional indicator-based framework presented by Tapia in 2021 has been tested for the first time on a green roof in the city of Madrid. The framework's set of indicators was implemented with other indicators, such as those of waste management and circularity (OECD Inventory of Circular Economy Indicators) and sustainability performance. The specific indicators to be used in the case study were decided after a consultation phase with relevant stakeholders. Data on the community's willingness to participate in green roof implementation initiatives were collected through interviews and online surveys with a heterogeneous sample of citizens.
The results of the application of the framework suggest how the different aspects of sustainability influence the choice of a green roof and provide input on the main mechanisms involved in citizens' willingness to participate in such initiatives.
Keywords: urban agriculture, green roof, urban sustainability, indicators, multi-criteria analysis
Procedia PDF Downloads 76
1050 Countering the Bullwhip Effect by Absorbing It Downstream in the Supply Chain
Authors: Geng Cui, Naoto Imura, Katsuhiro Nishinari, Takahiro Ezaki
Abstract:
The bullwhip effect, which refers to the amplification of demand variance as one moves up the supply chain, has been observed in various industries and extensively studied through analytic approaches. Existing methods to mitigate the bullwhip effect, such as decentralized demand information, vendor-managed inventory, and the Collaborative Planning, Forecasting, and Replenishment System, rely on the willingness and ability of supply chain participants to share their information. However, in practice, information sharing is often difficult to realize due to privacy concerns. The purpose of this study is to explore new ways to mitigate the bullwhip effect without the need for information sharing. This paper proposes a 'bullwhip absorption strategy' (BAS) to alleviate the bullwhip effect by absorbing it downstream in the supply chain. To achieve this, a two-stage supply chain system was employed, consisting of a single retailer and a single manufacturer. In each time period, the retailer receives an order generated according to an autoregressive process. Upon receiving the order, the retailer depletes the ordered amount, forecasts future demand based on past records, and places an order with the manufacturer using the order-up-to replenishment policy. The manufacturer follows a similar process. In essence, the mechanism of the model is similar to that of the beer game. The BAS is implemented at the retailer's level to counteract the bullwhip effect. This strategy requires the retailer to reduce the uncertainty in its orders, thereby absorbing the bullwhip effect downstream in the supply chain. The advantage of the BAS is that upstream participants can benefit from a reduced bullwhip effect. Although the retailer may incur additional costs, if the gain in the upstream segment can compensate for the retailer's loss, the entire supply chain will be better off. 
Two indicators, order variance and inventory variance, were used to quantify the bullwhip effect in relation to the strength of absorption. It was found that implementing the BAS at the retailer's level results in a reduction in both the retailer's and the manufacturer's order variances. However, when examining the impact on inventory variances, a trade-off relationship was observed. The manufacturer's inventory variance monotonically decreases with an increase in absorption strength, while the retailer's inventory variance does not always decrease as the absorption strength grows. This is especially true when the autoregression coefficient has a high value, causing the retailer's inventory variance to become a monotonically increasing function of the absorption strength. Finally, numerical simulations were conducted for verification, and the results were consistent with our theoretical analysis.
Keywords: bullwhip effect, supply chain management, inventory management, demand forecasting, order-up-to policy
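The two-stage setting above, AR(1) demand, an order-up-to retailer, and an absorption mechanism that damps the retailer's orders, can be illustrated with a small simulation. The paper's exact BAS formulation is not given in the abstract, so the damping below (smoothing orders toward the previous order) is only a stand-in for the "absorption strength":

```python
# Toy two-stage bullwhip simulation: AR(1) demand around a mean of 10, a
# naive order-up-to retailer, and an assumed absorption mechanism that
# smooths orders. The smoothing factor stands in for the paper's
# absorption strength; it is not the authors' actual BAS rule.
import random, statistics

def order_variance_ratio(phi=0.7, absorption=0.0, periods=5000, seed=0):
    random.seed(seed)
    demand, demands, orders = 10.0, [], []
    prev_order = 10.0
    for _ in range(periods):
        demand = 10.0 * (1 - phi) + phi * demand + random.gauss(0, 1)
        # order-up-to style order amplifies demand deviations under AR(1)
        raw_order = demand + phi * (demand - 10.0)
        # absorption: blend toward the previous order to damp variability
        order = absorption * prev_order + (1 - absorption) * raw_order
        demands.append(demand)
        orders.append(order)
        prev_order = order
    # ratio > 1 means the bullwhip effect: orders more variable than demand
    return statistics.variance(orders) / statistics.variance(demands)

print(order_variance_ratio(absorption=0.0))  # amplified orders, no absorption
print(order_variance_ratio(absorption=0.5))  # damped orders pass less variance upstream
```

With no absorption the variance ratio approaches the analytic (1 + phi)² amplification; increasing the absorption strength lowers the order variance seen by the manufacturer, at the cost of the retailer holding the mismatch in its own inventory, which is the trade-off described above.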
Procedia PDF Downloads 80
1049 The Requirements of Developing a Framework for Successful Adoption of Quality Management Systems in the Construction Industry
Authors: Mohammed Ali Ahmed, Vaughan Coffey, Bo Xia
Abstract:
Quality management systems (QMSs) in the construction industry are often implemented to ensure that sufficient effort is made by companies to achieve the required levels of quality for clients. Attainment of these quality levels can result in greater customer satisfaction, which is fundamental to ensuring long-term competitiveness for construction companies. However, the construction sector is still lagging behind other industries in its successful adoption of QMSs, due to the relative lack of acceptance of the benefits of these systems among industry stakeholders, as well as other barriers related to implementing them. Thus, there is a critical need to undertake a detailed and comprehensive exploration of the adoption of QMSs in the construction sector. This paper comprehensively investigates, in the construction sector setting, the impacts of all the salient factors surrounding successful implementation of QMSs in building organizations, especially external factors. This study is part of an ongoing PhD project, which aims to develop a new framework that integrates both internal and external factors affecting QMS implementation. To achieve the paper's aim and objectives, interviews will be conducted to define the external factors influencing the adoption of QMSs and to obtain holistic critical success factors (CSFs) for implementing these systems. In the next stage of data collection, a questionnaire survey will be developed to investigate the prime barriers facing the adoption of QMSs, the CSFs for their implementation, and the external factors affecting the adoption of these systems. Following the survey, case studies will be undertaken to validate and explain in greater detail the real effects of these factors on QMS adoption. Specifically, this paper evaluates the effects of the external factors in terms of their impact on implementation success within the selected case studies.
Using findings drawn from analyzing the data obtained from these various approaches, specific recommendations for the successful implementation of QMSs will be presented, and an operational framework will be developed. Finally, through a focus group, the findings of the study and the newly developed framework will be validated. Ultimately, this framework will be made available to the construction industry to facilitate the greater adoption and implementation of QMSs. In addition, the applicable recommendations suggested by the study will be shared with the construction industry to more effectively help construction companies implement QMSs and overcome the barriers experienced by businesses, thus promoting the achievement of higher levels of quality and customer satisfaction.
Keywords: barriers, critical success factors, external factors, internal factors, quality management systems
Procedia PDF Downloads 190
1048 A Hebbian Neural Network Model of the Stroop Effect
Authors: Vadim Kulikov
Abstract:
The classical Stroop effect is the phenomenon that it takes more time to name the ink color of a printed word if the word denotes a conflicting color than if it denotes the same color. Over the last 80 years, there have been many variations of the experiment revealing various mechanisms behind semantic, attentional, behavioral, and perceptual processing. The Stroop task is known to exhibit asymmetry: reading the words out loud is hardly dependent on the ink color, but naming the ink color is significantly influenced by incongruent words. This asymmetry is reversed if, instead of naming the color, one has to point at a corresponding color patch. Other debated aspects are the notion of automaticity and how much of the effect is due to semantic interference and how much to response-stage interference. Is automaticity a continuous or an all-or-none phenomenon? There are many models and theories in the literature tackling these questions, which will be discussed in the presentation. None of them, however, seems to capture all the findings at once. A computational model is proposed which is based on the philosophical idea, developed by the author, that the mind operates as a collection of different information processing modalities, such as different sensory and descriptive modalities, which produce emergent phenomena through mutual interaction and coherence. This is the framework theory, where 'framework' attempts to generalize the concepts of modality, perspective, and 'point of view'. The architecture of this computational model consists of blocks of neurons, each block corresponding to one framework. In the simplest case there are four: visual color processing, text reading, speech production, and attention selection modalities. In experiments where button pressing or pointing is required, a corresponding block is added. In the beginning, the weights of the neural connections are mostly set to zero.
The network is trained using Hebbian learning to establish connections (corresponding to 'coherence' in framework theory) between these different modalities. The amount of data fed into the network is supposed to mimic the amount of practice a human encounters; in particular, it is assumed that converting written text into spoken words is a more practiced skill than converting visually perceived colors to spoken color names. After the training, the network performs the Stroop task. The reaction times (RTs) are measured in a canonical way, as these are continuous-time recurrent neural networks (CTRNNs). The above-described aspects of the Stroop phenomenon, along with many others, are replicated. The model is similar to some existing connectionist models but, as will be discussed in the presentation, has many advantages: it predicts more data, and the architecture is simpler and biologically more plausible.
Keywords: connectionism, Hebbian learning, artificial neural networks, philosophy of mind, Stroop
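The Hebbian training idea above, cross-modal connections strengthening with co-activation, and more practice yielding stronger weights, can be sketched minimally. This is not the author's CTRNN model, only a toy illustration of the practice-asymmetry assumption:

```python
# Toy Hebbian association between two modality blocks (e.g., "text" units
# and "speech" units). Not the paper's CTRNN model; it only illustrates
# how unequal practice produces unequal connection strengths.
def hebbian_train(pairs, n_units=3, lr=0.1):
    """pairs: list of (pre_vector, post_vector); returns a weight matrix."""
    w = [[0.0] * n_units for _ in range(n_units)]
    for pre, post in pairs:
        for i in range(n_units):
            for j in range(n_units):
                w[i][j] += lr * pre[i] * post[j]  # dw = lr * pre * post
    return w

red_stimulus, red_name = [1, 0, 0], [1, 0, 0]
# reading is assumed to be practiced 10x more than color naming
w_read = hebbian_train([(red_stimulus, red_name)] * 100)
w_color = hebbian_train([(red_stimulus, red_name)] * 10)
print(w_read[0][0], w_color[0][0])  # the more practiced mapping is stronger
```

In the full model, the stronger text-to-speech pathway dominates the weaker color-to-speech pathway when they conflict, which is the mechanism behind the Stroop asymmetry described above.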
Procedia PDF Downloads 272
1047 A Simulated Evaluation of Model Predictive Control
Authors: Ahmed AlNouss, Salim Ahmed
Abstract:
Process control refers to the techniques used to control the variables in a process in order to maintain them at their desired values. Advanced process control (APC) is a broad term within the domain of control that covers different kinds of process control and control-related tools, for example, model predictive control (MPC), statistical process control (SPC), fault detection and classification (FDC), and performance assessment. APC is often used for solving multivariable control problems, and model predictive control (MPC) is one of only a few advanced control methods used successfully in industrial control applications. Advanced control is expected to bring many benefits to plant operation; however, the extent of the benefits is plant specific, and the application needs a large investment. This requires an analysis of the expected benefits before the implementation of the control. In a real plant, simulation studies are carried out along with some experimentation to determine the improvement in the performance of the plant due to advanced control. In this research, such an exercise is undertaken to assess the needs of APC application. The main objectives of the paper are as follows: (1) to apply MPC to a number of simulations in order to demonstrate the need for MPC by comparing its performance with that of proportional-integral-derivative (PID) controllers; (2) to study the effect of controller parameters on control performance; (3) to develop an appropriate performance index (PI) to compare the performance of different controllers and to develop a novel way of presenting a tuning map of a controller. These objectives were achieved by applying a PID controller and a special type of MPC, namely dynamic matrix control (DMC), to the multi-tank process simulated in Loop-Pro. The controller performance was then evaluated by changing the controller parameters.
This performance evaluation was based on special indices related to the difference between set point and process variable, in order to compare the two controllers. The same principle was applied to the continuous stirred tank heater (CSTH) and continuous stirred tank reactor (CSTR) processes simulated in MATLAB; for these processes, dedicated programs were written to evaluate the performance of the PID and MPC controllers. Finally, these performance indices, along with their controller parameters, were plotted using a program called SigmaPlot. As a result, the improvement in the performance of the control loops was quantified using relevant indices to justify the need for and importance of advanced process control. It was also shown that, by using appropriate indices, a predictive controller can improve the performance of the control loop significantly.
Keywords: advanced process control (APC), control loop, model predictive control (MPC), proportional-integral-derivative (PID), performance indices (PI)
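Indices based on the difference between set point and process variable, as described above, are typically of the integral-error family. The abstract does not give the authors' exact definitions, so the following is a sketch of two standard choices, IAE and ISE, on hypothetical sampled step responses:

```python
# Sketch of standard controller performance indices computed from sampled
# set-point and process-variable data: integral absolute error (IAE) and
# integral squared error (ISE). The paper's exact index definitions are
# not given in the abstract; these are the conventional forms.
def iae(setpoint, pv, dt=1.0):
    return sum(abs(sp - y) for sp, y in zip(setpoint, pv)) * dt

def ise(setpoint, pv, dt=1.0):
    return sum((sp - y) ** 2 for sp, y in zip(setpoint, pv)) * dt

# Hypothetical unit-step responses: a sluggish loop vs. a tighter loop.
sp = [1.0] * 5
sluggish = [0.0, 0.5, 0.75, 0.9, 1.0]
tight = [0.0, 0.8, 0.95, 1.0, 1.0]
print(iae(sp, sluggish), iae(sp, tight))  # lower IAE = better tracking
```

Plotting such an index over a grid of controller parameters (e.g., gain versus integral time) yields exactly the kind of tuning map the paper proposes.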
Procedia PDF Downloads 410
1046 Advances in Health Risk Assessment of Mycotoxins in Africa
Authors: Wilfred A. Abiaa, Chibundu N. Ezekiel, Benedikt Warth, Michael Sulyok, Paul C. Turner, Rudolf Krska, Paul F. Moundipa
Abstract:
Mycotoxins are a wide range of toxic secondary metabolites of fungi that contaminate various food commodities worldwide, especially in sub-Saharan Africa (SSA). Such contamination seriously compromises food safety and quality, posing a serious problem for human health as well as for trade and the economy. Their concentrations depend on various factors, such as the commodity itself, climatic conditions, storage conditions, seasonal variations, and processing methods. When humans consume foods contaminated by mycotoxins, the toxins exert toxic effects on their health through various modes of action. Rural populations in sub-Saharan Africa are exposed to dietary mycotoxins, but exposure levels and the associated health risks are presumed to vary between SSA countries. Dietary exposure and health risk assessment studies have been limited by a lack of equipment for properly assessing the associated health implications for consumer populations when they eat contaminated agricultural products. As such, mycotoxin research is still in its infancy in several SSA nations, and evaluation of products for mycotoxin loads below or above legislative limits is inadequate. Few nations have health risk assessment reports, and these are mainly based on direct quantification of the toxins in foods ('external exposure') and linking food levels with data from food frequency questionnaires. Nonetheless, the assessment of exposure and health risk from mycotoxins requires more than the traditional approaches. Only a fraction of the mycotoxins in contaminated foods reaches the blood stream and exerts toxicity ('internal exposure'). Also, internal exposure is usually smaller than external exposure; thus, dependence on external exposure alone may introduce confounders into risk assessment.
Some earlier studies from SSA focused on biomarker analysis, mainly of aflatoxins, while a few recent studies have concentrated on multi-biomarker analysis of exposures in urine, providing probable associations between observed disease occurrences and dietary mycotoxin levels. As a result, new techniques that can assess exposure levels directly in body tissues or fluids, and possibly link them to the disease state of individuals, have become urgently needed.
Keywords: mycotoxins, biomarkers, exposure assessment, health risk assessment, sub-Saharan Africa
Procedia PDF Downloads 578
1045 Building the Professional Readiness of Graduates from Day One: An Empirical Approach to Curriculum Continuous Improvement
Authors: Fiona Wahr, Sitalakshmi Venkatraman
Abstract:
Industry employers require new graduates to bring with them a range of knowledge, skills, and abilities so that these new employees can immediately make valuable work contributions. These will be a combination of discipline and professional knowledge, skills, and abilities which give graduates the technical capabilities to solve practical problems whilst interacting with a range of stakeholders. Underpinning the development of this discipline and professional knowledge, skills, and abilities are 'enabling' knowledge, skills, and abilities which assist students to engage in learning. These are academic and learning skills which are essential both as common starting points for students entering the course and as the foundation for the fully developed graduate knowledge, skills, and abilities. This paper reports on a project created to introduce and strengthen these enabling skills in the first semester of a Bachelor of Information Technology degree at an Australian polytechnic. The project uses an action research approach in the context of ongoing continuous improvement of the course to enhance the overall learning experience, learning sequencing, graduate outcomes, and, most importantly in the first semester, student engagement and retention. The focus is on implementing the new curriculum in first-semester subjects of the course, with the aim of developing the 'enabling' learning skills, such as literacy, research, and numeracy-based knowledge, skills, and abilities (KSAs). The approach used for the introduction and embedding of these KSAs (as both enablers of learning and underpinnings of graduate attribute development) is presented.
Building on previous publications which reported different aspects of this longitudinal study, this paper recaps the rationale for the curriculum redevelopment and then presents the quantitative findings on entering students’ reading literacy and numeracy knowledge and skill levels as well as their perceived research ability. The paper presents the methodology and findings for this stage of the research. Overall, the cohort exhibits mixed KSA levels in these areas, with a relatively low aggregated score. In addition, the paper describes the considerations for adjusting the design and delivery of the new subjects with a targeted learning experience, in response to the feedback gained through continuous monitoring. Such a strategy is aimed at accommodating the changing learning needs of the students and serves to support them towards achieving the enabling learning goals from day one of their higher education studies.
Keywords: enabling skills, student retention, embedded learning support, continuous improvement
Procedia PDF Downloads 250
1044 Analysis of the Strategic Value at the Usage of Green IT Application for the Organizational Product or Service in Order to Gain the Competitive Advantage; Case: E-Money of a Telecommunication Firm in Indonesia
Authors: I Putu Deny Arthawan Sugih Prabowo, Eko Nugroho, Rudy Hartanto
Abstract:
Green IT is a concept of using information technology (IT) wisely, efficiently, and in an environmentally responsible manner; it emerged as a consequence of the current rapid growth of technology (especially IT). Beyond its environmental benefits, the usage of Green IT applications, e.g., cloud computing (cloud storage) and e-money (e-cash), also benefits an organization's business strategy (especially its product/service strategy) in the pursuit of competitive advantage (market leadership). This paper takes as its case E-Money, a value-added service (VAS) of a telecommunication firm (company) in Indonesia that competes with similar products (services) from competitors. Although E-Money has become a popular product/service of the firm, its strategic value for the organization is still unknown; the aim of this paper is therefore to analyze its strategic value for gaining organizational competitive advantage. The strategic value analysis considers both the strategic benefits and the management of the challenges or risks of implementing E-Money in the organization as an organizational product/service. The paper uses a research model to investigate the influence of perceived risks and organizational culture on the usage of the Green IT application in the organization, and the influence of both that usage and the threats and challenges to the organization's products/services on the competitive advantage of those products/services. The paper applies a quantitative research method (collecting information from field respondents using research questionnaires), and the primary data are analyzed with both descriptive and inferential statistics. SmartPLS is used for analyzing the primary data obtained through the quantitative method.
Besides the quantitative method, the paper also uses qualitative research methods, such as interviews with field respondents and direct field observation, to confirm the quantitative analysis results in certain domains in depth, e.g., the organizational culture and internal processes that support the usage of Green IT applications for the organizational product/service (E-Money in this paper's case). The research is still at an early, in-progress stage. The paper's results may be used as a reference for organizations (firms or companies) in developing business strategies, especially for organizational products/services related to Green IT applications. It may also motivate future studies, e.g., on the influence of knowledge transfer about E-Money and/or other Green IT application-based products/services on the related organizational service performance in the pursuit of competitive advantage.
Keywords: Green IT, competitive advantage, strategic value, organization (firm or company), organizational product (service)
Procedia PDF Downloads 310
1043 Pleading the Belly: Sentencing of Convicted Pregnant Women under the Common Law
Authors: Nana Yaw Ofori Gyasi
Abstract:
Under the Common Law, there was a procedure called pleading the belly which allowed a woman who had reached an advanced stage of pregnancy to receive a reprieve of her death sentence until after she had given birth. The plea was later replaced with legislation providing that a pregnant woman would automatically have her death sentence commuted to life imprisonment with hard labour. This Common Law principle has been continued and enacted into law by the various countries where the Common Law is practiced. This paper looks at what it terms Pregnancy Legislations in selected Common Law countries, such as the United States of America, Canada, England and Wales, Ghana and India, to examine the scope, procedure and effect of such legislations. The paper adopts a comparative study approach to ascertain the country with the widest scope, least complicated procedure and most far-reaching effects of the Pregnancy Legislations. It is observed that some legislations make provision for the conversion of the death penalty to life imprisonment for capital offences and also prescribe non-custodial sentences for non-capital offences. Other legislations merely suspend the death penalty while the convict is found to be pregnant. In terms of procedure, some of the legislations make the issue of pregnancy a question of fact to be determined by a jury, while in others, the trial judge makes that determination after being satisfied on the question of the convict being pregnant. The effects of the Pregnancy Legislations are observed to be varying. Women who give birth in prison are at high risk of having a stillbirth. Most prisons do not have adequate facilities to support expectant and lactating mothers. It has also been observed that, with the number of female prisoners increasing over the years, custodial sentences for convicted pregnant women have a wider societal effect.
The paper identifies certain gaps left in some of the legislations which relate to the procedure to be followed after a custodial sentence is suspended for a convicted pregnant woman. The time at which the accused became pregnant (whether before her arrest or during trial) and the effect of that timing are gaps left in some of the legislations. The paper argues that such gaps should be filled by the legislator to prevent accused persons from taking undue advantage of the Pregnancy Legislations. It is further argued that if convicted pregnant women must spend time in prison at all for very heinous crimes, prison facilities should be improved so that expectant and lactating mothers can comfortably care for their babies and themselves, to prevent dire health consequences for such mothers and for society as a whole.
Keywords: sentence of pregnant women, custodial sentence, pregnant women, common law
Procedia PDF Downloads 51
1042 Consensus Reaching Process and False Consensus Effect in a Problem of Portfolio Selection
Authors: Viviana Ventre, Giacomo Di Tollo, Roberta Martino
Abstract:
The portfolio selection problem includes the evaluation of many criteria that are difficult to compare directly and is characterized by uncertain elements. The portfolio selection problem can be modeled as a group decision problem in which several experts are invited to present their assessments. In this context, it is important to study and analyze the process of reaching a consensus among group members. Indeed, due to the various diversities among experts, reaching consensus is not necessarily simple or easily achievable. Moreover, the concept of consensus is accompanied by the concept of false consensus, which is particularly interesting in the dynamics of group decision-making processes. False consensus can alter the evaluation and selection phase of the alternatives and is the consequence of the decision maker's inability to recognize that his preferences are conditioned by subjective structures. The present work aims to investigate the dynamics of consensus attainment in a group decision problem in which equivalent portfolios are proposed. In particular, the study aims to analyze the impact of the subjective structure of the decision-maker during the evaluation and selection phase of the alternatives. Therefore, the experimental framework is divided into three phases. In the first phase, experts are asked to evaluate the characteristics of all portfolios individually, without peer comparison, arriving independently at the selection of the preferred portfolio. The experts' evaluations are used to obtain individual Analytical Hierarchical Processes that define the weight each expert gives to all criteria with respect to the proposed alternatives. This step provides insight into how the decision maker's decision process develops, step by step, from goal analysis to alternative selection. The second phase includes the description of the decision maker's state through Markov chains.
In fact, the individual weights obtained in the first phase can be reviewed and described as transition weights from one state to another. Thus, with the construction of the individual transition matrices, the possible next state of the expert is determined from the individual weights at the end of the first phase. Finally, the experts meet, and the process of reaching consensus is analyzed by considering the single individual state obtained at the previous stage and the false consensus bias. The work contributes to the study of the impact of subjective structures, quantified through the Analytical Hierarchical Process, and how they combine with the false consensus bias in group decision-making dynamics and the consensus reaching process in problems involving the selection of equivalent portfolios.
Keywords: analytical hierarchical process, consensus building, false consensus effect, Markov chains, portfolio selection problem
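As a minimal sketch of the first-phase AHP step, the snippet below derives one expert's criteria weights as the normalized principal eigenvector of a pairwise comparison matrix; the 3x3 matrix, the criteria it compares, and the resulting weights are illustrative assumptions, not data from the study.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three portfolio criteria
# (say, risk vs. return vs. liquidity): entry A[i, j] is how strongly
# criterion i is preferred to criterion j on Saaty's 1-9 scale.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# AHP weights: the normalized principal (largest-eigenvalue) eigenvector.
eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, np.argmax(eigvals.real)].real
weights = principal / principal.sum()

print(weights.round(3))  # roughly [0.648, 0.230, 0.122] for this matrix
```

Because the normalized weights sum to 1, a vector of this kind can be reused directly as a row of the individual Markov transition matrix described in the second phase.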
Procedia PDF Downloads 98
1041 The Sociocultural, Economic, and Environmental Contestations of Agbogbloshie: A Critical Review
Authors: Khiddir Iddris, Martin Oteng – Ababio, Andreas Bürkert, Christoph Scherrer, Katharina Hemmler
Abstract:
Agbogbloshie, as an informal settlement and economy where the e-waste sector thrives, has become a global hub of complex urban contestations involving sociocultural, economic, and environmental dimensions, owing to the implications that e-waste and informal economic patterns have for livelihoods, urbanisation, development and sustainability. Multi-author collaborations have produced an ever-growing body of literature on Agbogbloshie and the informal e-waste economy. There is, however, a dearth of assessments of Agbogbloshie as an urban informal settlement's intricate nexus of socioecological contestations. We address this gap by systematising the contextual knowledge from the literature, navigating the complex terrain of Agbogbloshie's challenges and employing a multidimensional lens to unravel the sociocultural intricacies, economic dynamics, and environmental complexities shaping its identity. A systematic critical review approach was adopted, pragmatically consolidating content analysis and controversy mapping grounded in the concept of ‘sustainable rurbanism’; this highlighted core themes and identified contrasting viewpoints. An analytical framework is presented. Five categories (geohistorical, sociocultural, economic, environmental and future trends) are proposed as an approach to systematising the literature. The review finds that the sociocultural dimension unveils a mosaic of cultural amalgamation, communal identity, and tensions impacting community cohesion. The analysis of economic intricacies reveals the prevalence of informal economies sustaining livelihoods yet entrenching economic disparities and marginalisation. Environmental scrutiny exposes the grim realities of e-waste disposal, pollution, and land use conflicts. The findings suggest high resilience within the community and the potential for sustainable trajectories. Theoretical and conceptual synergy is limited.
This review provides a comprehensive exploration, offering insights and directions for future research, policy formulation, and community-driven interventions aimed at fostering sustainable transformations in Agbogbloshie and analogous urban contexts.
Keywords: Agbogbloshie, economic complexities, environmental challenges, resilience, sociocultural dynamics, sustainability, urban informal settlement
Procedia PDF Downloads 76
1040 Criticality Assessment Model for Water Pipelines Using Fuzzy Analytical Network Process
Abstract:
Water networks (WNs) are responsible for providing adequate amounts of safe, high-quality water to the public. Like other critical infrastructure systems, WNs are subject to deterioration, which increases the number of breaks and leaks and lowers water quality. In Canada, 35% of water assets require critical attention, and there is a significant gap between the needed and the implemented investments. Thus, the need for efficient rehabilitation programs is becoming more urgent given the paradigm of aging infrastructure and tight budgets. The first step towards developing such programs is to formulate a Performance Index that reflects the current condition of water assets along with their criticality. While numerous studies in the literature have focused on various aspects of condition assessment and reliability, limited efforts have investigated the criticality of such components. Critical water mains are those whose failure causes significant economic, environmental or social impacts on a community. Inclusion of criticality in computing the Performance Index will serve as a prioritizing tool for the optimal allocation of available resources and budget. In this study, several social, economic, and environmental factors that dictate the criticality of water pipelines were elicited from the literature. Expert opinions were sought to provide pairwise comparisons of the importance of such factors. Subsequently, fuzzy logic along with the Analytical Network Process (ANP) was utilized to calculate the weights of several criteria factors. Multi-Attribute Utility Theory (MAUT) was then employed to integrate the aforementioned weights with the attribute values of several pipelines in the Montreal WN. The result is a criticality index, 0-1, that quantifies the severity of the consequence of failure of each pipeline.
A novel contribution of this approach is that it accounts for both the interdependency between criteria factors and the inherent uncertainties in calculating the criticality. The practical value of the current study is represented by the automated tool, Excel-MATLAB, which can be used by utility managers and decision makers in planning future maintenance and rehabilitation activities where highly efficient use of materials and time resources is required.
Keywords: water networks, criticality assessment, asset management, fuzzy analytical network process
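The MAUT aggregation step can be sketched as a simple additive utility model; the criteria weights (which in the study come from the fuzzy ANP), the pipeline IDs, and the attribute utilities below are invented for illustration only.

```python
# Hedged sketch of an additive multi-attribute utility aggregation:
# criticality = sum over criteria of (criterion weight * attribute utility),
# yielding a 0-1 index when weights sum to 1 and utilities lie in [0, 1].
weights = {"economic": 0.45, "social": 0.35, "environmental": 0.20}

# Attribute values already scaled to [0, 1] utilities (1 = worst consequence).
pipelines = {
    "P-101": {"economic": 0.9, "social": 0.7, "environmental": 0.4},
    "P-102": {"economic": 0.3, "social": 0.2, "environmental": 0.1},
}

def criticality(attrs, weights):
    """Additive MAUT: weighted sum of single-attribute utilities."""
    return sum(weights[c] * attrs[c] for c in weights)

scores = {p: criticality(a, weights) for p, a in pipelines.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
print(scores)  # P-101 ≈ 0.73, P-102 ≈ 0.225: P-101 gets rehabilitation priority
```

Ranking the resulting indices is what turns the model into the prioritizing tool the abstract describes.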
Procedia PDF Downloads 151
1039 Computational and Experimental Determination of Acoustic Impedance of Internal Combustion Engine Exhaust
Authors: A. O. Glazkov, A. S. Krylova, G. G. Nadareishvili, A. S. Terenchenko, S. I. Yudin
Abstract:
The topic of the presented materials is the design of the exhaust system for a certain internal combustion engine. The exhaust system can be divided into two parts. The first is the engine exhaust manifold, turbocharger, and catalytic converters, which are called the “hot part.” The second part is the gas exhaust system, which contains elements exclusively for reducing exhaust noise (mufflers, resonators); the accepted designation for this is the “cold part.” Designing the exhaust system from the point of view of acoustics, that is, reducing the exhaust noise to a predetermined level, consists of working on the second part. Modern computer technology and software make it possible to design the “cold part” with high accuracy in a given frequency range, but only on condition that the input parameters are accurately specified, namely, the amplitude spectrum of the input noise and the acoustic impedance of the noise source in the form of an engine with the “hot part.” Obtaining these data is a difficult problem: high temperatures, high exhaust gas velocities (turbulent flows), and high sound pressure levels (nonlinear regime) do not allow the calculated results to be applied with sufficient accuracy. The aim of this work is to obtain the most reliable acoustic output parameters of an engine with the “hot part” based on a complex of computational and experimental studies. The presented methodology includes several parts. The first part is a finite element simulation of the “cold part” of the exhaust system (taking into account the acoustic impedance of the outlet pipe's radiation into open space), with the result in the form of the input impedance of the “cold part.” The second part is a finite element simulation of the “hot part” of the exhaust system (taking into account the acoustic characteristics of the catalytic units and the geometry of the turbocharger), with the result in the form of the input impedance of the “hot part.”
The third part of the technique consists of the mathematical processing of the results according to the proposed formula for the convergence of the mathematical series formed by summing the multiple reflections of the acoustic signal between the “cold part” and the “hot part”. This is followed by a set of tests on an engine stand with two high-temperature pressure sensors measuring pulsations in the nozzle between the “hot part” and the “cold part” of the exhaust system, and subsequent processing of the test results according to a well-known technique in order to separate the “incident” and “reflected” waves. The final stage consists of the mathematical processing of all calculated and experimental data to obtain a result in the form of a spectrum of the amplitude of the engine noise and its acoustic impedance.
Keywords: acoustic impedance, engine exhaust system, FEM model, test stand
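The series-summation idea in the third part can be illustrated with a toy calculation: each round trip of a wave between the two subsystems multiplies its amplitude by the product of the two reflection coefficients, so the multiply-reflected contributions form a geometric series that converges whenever that product has magnitude below one. The complex reflection coefficients below are invented for illustration, not measured values.

```python
# Hedged sketch of summing multiple reflections between two subsystems.
# r_hot and r_cold are assumed complex reflection coefficients (illustrative).
r_hot = 0.6 + 0.2j
r_cold = 0.5 - 0.1j

q = r_hot * r_cold                         # amplitude factor per round trip
partial = sum(q**n for n in range(200))    # truncated series of reflections
closed_form = 1.0 / (1.0 - q)              # geometric-series limit, |q| < 1

assert abs(q) < 1.0                        # convergence condition
print(abs(partial - closed_form))          # truncation error is negligible
```

The closed-form limit is what makes it practical to fold the infinite reflection sequence into a single correction factor when combining the two measured/simulated impedances.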
Procedia PDF Downloads 63
1038 Histopathological Analysis of Vital Organs in Cattle Infected with Lumpy Skin Disease in Rajasthan, India
Authors: Manisha, Manisha Mathur, Jay K. Desai, Shesh Asopa, Manisha Mehra
Abstract:
The present study was carried out as a comprehensive analysis of lumpy skin disease (LSD) in cattle and to elucidate the histopathology of vital organs in natural outbreaks. Lumpy skin disease (LSD) is a viral infection that primarily affects cattle. It is caused by a capripoxvirus and is characterized by the formation of skin nodules or lesions. For this study, postmortems of 20 cows that died of lumpy skin disease in different regions of Rajasthan were conducted. This study aimed to examine the cows' external and internal organs to confirm whether lumpy skin disease was the cause of death. Accurate diagnosis is essential for improving disease surveillance, understanding the disease's progression, and informing control measures. Pathological examinations reveal virus-induced changes across organs, while histopathological analyses provide crucial insights into the disease's pathogenesis, aiding in the development of advanced diagnostics and effective prevention strategies. Histopathological examination of nodular skin lesions revealed edema, hyperemia, acanthosis, severe hydropic/ballooning degeneration, and hyperkeratosis in the epidermis. In the lungs, congestion, edema, emphysema, and atelectasis were observed grossly. Microscopically, changes were suggestive of interstitial pneumonia, suppurative pneumonia, bronchopneumonia, post-pneumonic fibrosis, and stages of resolution. Grossly, the liver showed congestion and necrotic foci; microscopically, in most of the cases, the liver showed acute viral hepatitis. Microscopically, multifocal interstitial nephritis was observed in the kidneys. There was marked interstitial inflammation and zonal fibrosis with cystically dilated tubules and Bowman's capsules. Microscopically, most of the heart tissue sections showed normal histology, with a few sarcocysts between cardiac muscles. In some cases, loss of cross striation, sarcoplasmic vacuolation, fragmentation, and disintegration of cardiac fibres were observed.
The present study revealed the characteristic gross and histopathological changes in different organs in natural cases of lumpy skin disease. Further, the disease was confirmed based on molecular diagnosis and transmission electron microscopy of capripox infection in the affected cattle in the study area.
Keywords: Capripoxvirus, lumpy skin disease, polymerase chain reaction, transmission electron microscopy
Procedia PDF Downloads 32
1037 Improving Photocatalytic Efficiency of TiO2 Films Incorporated with Natural Geopolymer for Sunlight-Driven Water Purification
Authors: Satam Alotibi, Haya A. Al-Sunaidi, Almaymunah M. AlRoibah, Zahraa H. Al-Omaran, Mohammed Alyami, Fatehia S. Alhakami, Abdellah Kaiba, Mazen Alshaaer, Talal F. Qahtan
Abstract:
This research study presents a novel approach to harnessing the potential of natural geopolymer in conjunction with TiO₂ nanoparticles (TiO₂ NPs) for the development of highly efficient photocatalytic materials for water decontamination. The study begins with the formulation of a geopolymer paste derived from natural sources, which is subsequently applied as a coating on glass substrates and allowed to air-dry at room temperature. The result is a series of geopolymer-coated glass films, serving as the foundation for further experimentation. To enhance the photocatalytic capabilities of these films, a critical step involves immersing them in a suspension of TiO₂ nanoparticles (TiO₂ NPs) in water for varying durations. This immersion process yields geopolymer-loaded TiO₂ NPs films with varying concentrations, setting the stage for comprehensive characterization and analysis. A range of advanced analytical techniques, including UV-Vis spectroscopy, Fourier-transform infrared spectroscopy (FTIR), Raman spectroscopy, scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS), and atomic force microscopy (AFM), were meticulously employed to assess the structural, morphological, and chemical properties of the geopolymer-based TiO₂ films. These analyses provided invaluable insights into the materials' composition and surface characteristics. The culmination of this research effort sees the geopolymer-based TiO₂ films being repurposed as immobilized photocatalytic reactors for water decontamination under natural sunlight irradiation. Remarkably, the results revealed exceptional photocatalytic performance that exceeded the capabilities of conventional TiO₂-based photocatalysts. This breakthrough underscores the significant potential of natural geopolymer as a versatile and highly effective matrix for enhancing the photocatalytic efficiency of TiO₂ nanoparticles in water treatment applications. 
In summary, this study represents a significant advancement in the quest for sustainable and efficient photocatalytic materials for environmental remediation. By harnessing the synergistic effects of natural geopolymer and TiO₂ nanoparticles, these geopolymer-based films exhibit outstanding promise in addressing water decontamination challenges and contribute to the development of eco-friendly solutions for a cleaner and healthier environment.
Keywords: geopolymer, TiO2 nanoparticles, photocatalytic materials, water decontamination, sustainable remediation
Procedia PDF Downloads 71
1036 Early Prediction of Cognitive Impairment in Adults Aged 20 Years and Older using Machine Learning and Biomarkers of Heavy Metal Exposure
Authors: Ali Nabavi, Farimah Safari, Mohammad Kashkooli, Sara Sadat Nabavizadeh, Hossein Molavi Vardanjani
Abstract:
Cognitive impairment presents a significant and increasing health concern as populations age. Environmental risk factors such as heavy metal exposure are suspected contributors, but their specific roles remain incompletely understood. Machine learning offers a promising approach to integrating multi-factorial data and improving the prediction of cognitive outcomes. This study aimed to develop and validate machine learning models to predict early risk of cognitive impairment by incorporating demographic, clinical, and biomarker data, including measures of heavy metal exposure. A retrospective analysis was conducted using 2011-2014 National Health and Nutrition Examination Survey (NHANES) data. The dataset included participants aged 20 years and older who underwent cognitive testing. Variables encompassed demographic information, medical history, lifestyle factors, and biomarkers such as blood and urine levels of lead, cadmium, manganese, and other metals. Machine learning algorithms were trained on 90% of the data and evaluated on the remaining 10%, with performance assessed through metrics such as accuracy, area under the curve (AUC), and sensitivity. The analysis included 2,933 participants. The stacking ensemble model demonstrated the highest predictive performance, achieving an AUC of 0.778 and a sensitivity of 0.879 on the test dataset. Key predictors included age, gender, hypertension, education level, urinary cadmium, and blood manganese levels. The findings indicate that machine learning can effectively predict the risk of cognitive impairment using a comprehensive set of clinical and environmental exposure data. Incorporating biomarkers of heavy metal exposure improved prediction accuracy and highlighted the role of environmental factors in cognitive decline. Further prospective studies are recommended to validate the models and assess their utility over time.
Keywords: cognitive impairment, heavy metal exposure, predictive models, aging
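For readers unfamiliar with the reported metric, the AUC quoted above (0.778 for the stacking ensemble) can be computed directly from its rank-statistic definition: the probability that a randomly chosen positive case receives a higher predicted risk than a randomly chosen negative case. The labels and scores below are toy values, not NHANES data.

```python
# Hedged sketch: rank-based AUC (equivalent to the Mann-Whitney U statistic
# normalized by the number of positive-negative pairs), with ties counted
# as half a win.
def auc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y = [1, 1, 0, 0, 0]            # 1 = cognitive impairment (toy labels)
p = [0.9, 0.6, 0.7, 0.3, 0.2]  # predicted risk scores (toy values)
print(auc(y, p))  # 5 of the 6 positive-negative pairs are correctly ordered
```

An AUC of 0.5 would correspond to chance-level ranking and 1.0 to perfect separation of impaired from unimpaired participants.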
Procedia PDF Downloads 8
1035 Study on the Use of Manganese-Containing Materials as a Micro Fertilizer Based on the Local Mineral Resources and Industrial Wastes in Hydroponic Systems
Authors: Marine Shavlakadze
Abstract:
Hydroponic greenhouse systems (production on artificial substrate without soil) are becoming popular around the world. The system is mostly used to grow vegetables and berries. Different countries are taking action to participate in the development of hydroponic technology and solutions, such as the EU members, Turkey, Australia, New Zealand, Israel, and the Scandinavian countries. Many vegetables and berries are grown hydroponically in Europe. As a result of our research, we have obtained a material containing manganese and nitrogen. It became possible to produce this fertilizer by one-stage thermal processing, using manganese-containing industrial waste (ores and sludges) and a mineral substance (ammonium nitrate) that exist in Georgia. The resulting material is usable as a micro-fertilizer and is economically efficient to produce. It also became possible to convert the practically water-insoluble manganese dioxide in industrial waste into a soluble form by this indirect route. The material's suitability as a fertilizer is determined by its chemical and phase composition, as the amount of the active component relative to manganese is 30%. At the same time, the active component elements are present as non-ballast, sustained-action compounds. The studies we implemented in Poland and in Georgia have shown that the manganese-containing micro-fertilizer Mn(NO3)2 can provide the plant with nitrate nitrogen, a form plants can use, ensuring economy and simplicity in the application of fertilizers. Given that the application of manganese-containing micro-fertilizers significantly increases productivity and improves the quality of a large number of agricultural products, it is recommended to apply manganese-containing fertilizers to the following crops: sugar beet, corn, potato, vegetables, vine grape, fruit, berries, and other crops.
Also, as a result of the study, it was established that the material obtained is a preferred fertilizer for vegetable crops grown in soil. Based on the positive results of the research, we consider it expedient to conduct research in hydroponic systems, which will enable us to provide plants with the required amount of manganese; we also introduce nitrogen into the solution and regulate its pH, which is one of the main problems in hydroponic production. The findings of our research will be used in hydroponic greenhouse farms to increase the fertility of vegetable crops and, consequently, to obtain bountiful, high-quality harvests, which will promote the development of hydroponic greenhouses in Georgia as well as abroad.
Keywords: hydroponics, micro-fertilizers, manganese-containing materials, industrial wastes
Procedia PDF Downloads 134
1034 Territorialisation and Elections: Land and Politics in Benin
Authors: Kamal Donko
Abstract:
In the frontier zone of the Republic of Benin, land appears to be a fundamental political resource, as it is used as a tool for socio-political mobilization, blackmail, inclusion and exclusion, conquest, and political control. This paper seeks to examine the complex and intriguing interlinks between land, identity and politics in central Benin. It aims to investigate what roles territorialisation and land ownership play in the electioneering process in central Benin. It employs an ethnographic, multi-sited approach to data collection, including observations, interviews and focus group discussions. Research findings reveal a complex and intriguing relationship between land ownership and politics in central Benin. Land is found to be playing a key role in the electioneering process in the region. The study has also discovered many emerging socio-spatial patterns of controlling and maintaining political power in the zone which are tied to land politics. These include identity reconstruction and integration mechanisms through intermarriages, socio-political initiatives, and the construction of infrastructure of sovereignty. It was also found that ‘Diaspora organizations’ and identity issues, the strategic creation of administrative units, alliance-building strategies, and the gerrymandering of the local political field are currently affecting relationships between migrant and native communities. The study argues that territorialisation is not only about national boundaries and the demarcation between different nation states but, more importantly, serves as a powerful tool of domination and political control at the grassroots level.
Furthermore, this study seems to provide another perspective from which the political situation in Africa can be studied. Investigating how the dynamics of land ownership influence politics at the grassroots or micro level, this study is fundamental to understanding spatial issues in the frontier zone.
Keywords: land, migration, politics, territorialisation
Procedia PDF Downloads 364
1033 A Study on the Measurement of Spatial Mismatch and the Influencing Factors of “Job-Housing” in Affordable Housing from the Perspective of Commuting
Authors: Daijun Chen
Abstract:
Affordable housing is subsidized by the government to meet the housing demand of low- and middle-income urban residents during urbanization and to alleviate the housing inequality caused by market-based housing reforms. It is a recognized fact that the living conditions of beneficiaries have improved as subsidized housing has been built. However, affordable housing is mostly located in the suburbs, where the surrounding urban functions and infrastructure are incomplete, resulting in a "jobs-housing" spatial mismatch. The main reason for this problem is that residents of affordable housing are more sensitive to the spatial location of their residence, yet their ability to select and control that location is relatively weak, which leads to higher commuting costs; their real cost of living has not been effectively reduced. In this regard, 92 subsidized housing communities in Nanjing, China, are selected as the research sample in this paper. The residents of the affordable housing and their commuting spatio-temporal behavior characteristics are identified based on LBS (location-based service) data. Based on spatial mismatch theory, indicators such as commuting distance and commuting time are established to measure the degree of spatial mismatch of subsidized housing in different districts of Nanjing. Furthermore, a geographically weighted regression model is used to analyze the factors influencing the spatial mismatch of affordable housing in terms of the provision of employment opportunities, traffic accessibility, and supporting service facilities, using spatial, functional, and other multi-source spatio-temporal big data. The results show that the spatial mismatch of affordable housing in Nanjing generally presents a "concentric circle" pattern, decreasing from the central urban area to the periphery. 
The factors affecting the spatial mismatch of affordable housing differ across spatial zones. High mismatch is mainly explained by the number of enterprises within 1 km of the affordable housing district and the shortest distance to a subway station, while low mismatch is associated with the diversity of services and facilities. Based on this, a spatial optimization strategy is proposed for each level of spatial mismatch in subsidized housing, and feasible suggestions for the future site selection of subsidized housing are also provided. The aim is to avoid or mitigate the impact of "spatial mismatch," promote the "spatial adaptation" of "jobs-housing," and truly improve the overall welfare of affordable housing residents.
Keywords: affordable housing, spatial mismatch, commuting characteristics, spatial adaptation, welfare benefits
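The core measurement idea described in the abstract can be illustrated with a minimal sketch: compare each community's average commuting distance and time against a city-wide baseline to obtain a mismatch indicator. The community names, PAR weights, and all numeric values below are illustrative assumptions, not data or formulas from the study.

```python
# Hypothetical sketch of a "jobs-housing" spatial mismatch indicator built
# from commuting distance and commuting time, as the abstract describes.
# All names and numbers are made up for illustration.

def mismatch_index(avg_distance_km, avg_time_min,
                   city_avg_distance_km, city_avg_time_min,
                   w_distance=0.5, w_time=0.5):
    """Weighted ratio of a community's commuting burden to the city average.

    Values above 1.0 indicate a worse-than-average jobs-housing mismatch.
    """
    return (w_distance * avg_distance_km / city_avg_distance_km
            + w_time * avg_time_min / city_avg_time_min)

communities = {
    "suburban_estate_A": (14.2, 52.0),    # (km, minutes) -- illustrative
    "inner_ring_estate_B": (6.8, 28.0),
}
city_avg = (9.0, 35.0)

for name, (d, t) in communities.items():
    print(f"{name}: mismatch index = {mismatch_index(d, t, *city_avg):.2f}")
```

In the study itself, such an indicator would then become the dependent variable of the geographically weighted regression, with employment provision, transit accessibility, and service facilities as explanatory variables.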
Procedia PDF Downloads 116
1032 The Impact of Social Customer Relationship Management on Brand Loyalty and Reducing Co-Destruction of Value by Customers
Authors: Sanaz Farhangi, Habib Alipour
Abstract:
The main objective of this paper is to explore how social media, as a critical platform, can increase interactions between the tourism sector and its stakeholders. Nowadays, human interactions through social media in many areas, especially in tourism, generate various experiences and information that users share and discuss. Organizations and firms can gain customer loyalty through social media platforms even when consumers hold a negative image of the product or service. Such a negative image can be reduced through constant communication between producers and consumers, especially given the availability of new technology. Therefore, effective management of customer relationships in social media creates an extraordinary opportunity for organizations to enhance value and brand loyalty. In this study, we seek to develop a conceptual model addressing how factors such as social media, SCRM, and customer engagement affect brand loyalty and diminish co-destruction. To support this model, we scanned the relevant literature across a comprehensive range of ideas in marketing and customer relationship management. This allows us to explore whether there is any relationship between social media, customer engagement, social customer relationship management (SCRM), co-destruction, and brand loyalty. SCRM is explored as a moderating factor in the relationship between customer engagement and social media for securing brand loyalty and diminishing the co-destruction of the company’s value. Although numerous studies have been conducted on the impact of social media on customers and marketing behavior, few have investigated the relationship between SCRM, brand loyalty, and negative e-WOM, which results in the reduction of the co-destruction of value by customers. This study is an important contribution to the tourism and hospitality industry in orienting customer behavior in social media using SCRM. 
This study revealed that through social media platforms, management can generate discussion and engagement around products and services, which helps customers feel positively towards the firm and its products. The study also revealed that customers’ complaints through social media have a dual effect: they can degrade the value of the product, but at the same time they motivate the firm to overcome its weaknesses and correct its shortcomings. The study also carries implications for managers and practitioners, especially in the tourism and hospitality sector. Future research directions and limitations of the research are also discussed.
Keywords: brand loyalty, co-destruction, customer engagement, SCRM, tourism and hospitality
Procedia PDF Downloads 119
1031 Postharvest Losses and Handling Improvement of Organic Pak-Choi and Choy Sum
Authors: Pichaya Poonlarp, Danai Boonyakiat, C. Chuamuangphan, M. Chanta
Abstract:
Current consumer behavior trends have shifted towards greater health awareness, the well-being of society, and interest in nature and the environment. The Royal Project Foundation is, therefore, well aware of organic agriculture. The project focuses solely on using natural products and utilizing the biological merits of the highlands to increase resistance to diseases and insects in the produce grown. The project also draws on basic knowledge from a variety of available research, including, but not limited to, improvement of soil fertility and control of plant insects by biological methods, in order to lay a foundation for developing and encouraging farmers to grow quality produce with high health safety. This will ultimately lead to sustainability for future highland agriculture and a decrease in chemical use in the highland area, which is a source of natural watersheds. However, there are still shortcomings in postharvest management in terms of quality and losses, such as bruising, rottenness, wilting, and yellowing leaves. These losses negatively affect the quality maintenance and shelf life of organic vegetables. Therefore, it is important that a study of appropriate and effective postharvest management be conducted for each organic vegetable to minimize product loss and to find the root causes of postharvest losses, which would contribute to future postharvest management best practices. This can be achieved through surveys and data collection across postharvest processes in order to analyze the causes of postharvest losses of organic pak-choi, baby pak-choi, and choy sum. Consequently, strategies for reducing postharvest losses of organic vegetables can be developed. 
In this study, postharvest losses of organic pak-choi, baby pak-choi, and choy sum were determined at each stage of the supply chain, starting from the field after harvesting, then at the Development Center packinghouse, the Chiang Mai packinghouse, the Bangkok packinghouse, and the Royal Project retail shop in Chiang Mai. The results showed that postharvest losses of organic pak-choi, baby pak-choi, and choy sum were 86.05, 89.05, and 59.03 percent, respectively. The main factors contributing to losses were mechanical damage and underutilized parts and/or produce falling short of the minimum quality standard. Good practices were developed once the causes of losses had been identified. With appropriate postharvest handling and management, for example, temperature control, hygienic cleaning, and shortening the supply chain, postharvest losses of all three organic vegetables should be remarkably reduced.
Keywords: postharvest losses, organic vegetables, handling improvement, shelf life, supply chain
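Because losses are measured stage by stage along the chain, the overall loss compounds multiplicatively rather than adding up. The sketch below illustrates that accounting; the stage names mirror the chain described in the abstract, but the per-stage loss fractions are invented for illustration, not the study's measurements.

```python
# Illustrative sketch: cumulative postharvest loss along a multi-stage
# supply chain. Each stage loses a fraction of the produce that *reaches*
# it, so losses compound multiplicatively. All loss fractions below are
# hypothetical, not values from the study.

def cumulative_loss(stage_losses):
    """Return the total loss fraction over all stages."""
    surviving = 1.0
    for loss in stage_losses.values():
        surviving *= (1.0 - loss)
    return 1.0 - surviving

stages = {
    "field_after_harvest": 0.20,
    "development_center_packinghouse": 0.15,
    "chiang_mai_packinghouse": 0.10,
    "bangkok_packinghouse": 0.10,
    "retail_shop": 0.05,
}
print(f"total loss: {cumulative_loss(stages):.1%}")
```

This framing also shows why shortening the supply chain is an effective intervention: removing a stage removes its multiplicative contribution to the total loss.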
Procedia PDF Downloads 484
1030 Portable and Parallel Accelerated Development Method for Field-Programmable Gate Array (FPGA)-Central Processing Unit (CPU)-Graphics Processing Unit (GPU) Heterogeneous Computing
Authors: Nan Hu, Chao Wang, Xi Li, Xuehai Zhou
Abstract:
The field-programmable gate array (FPGA) has been widely adopted in the high-performance computing domain. In recent years, embedded system-on-a-chip (SoC) devices have come to contain coarse-grained multi-core CPUs (central processing units) and mobile GPUs (graphics processing units) that can be used as general-purpose accelerators. The motivation is that algorithms with various parallel characteristics can be efficiently mapped onto a heterogeneous architecture coupling these three processors. The CPU and GPU offload some computationally intensive tasks from the FPGA to reduce resource consumption and lower the overall cost of the system. However, in common present-day scenarios, applications usually utilize only one type of accelerator, because development approaches supporting the collaboration of heterogeneous processors face challenges. Therefore, a systematic approach is needed that offers write-once-run-anywhere portability and high execution performance for modules mapped to the various architectures, and that facilitates the exploration of the design space. In this paper, a servant-execution-flow model is proposed as an abstraction of the cooperation of the heterogeneous processors, supporting task partitioning, communication, and synchronization. At its first run, the intermediate language, represented by a data-flow diagram, can generate executable code for the target processor or be converted into high-level programming languages. Instantiation parameters efficiently control the relationship between modules and computational units, including the mapping of two hierarchical processing units and the adjustment of data-level parallelism. An embedded system for a three-dimensional waveform oscilloscope is selected as a case study. The performance of algorithms such as contrast stretching is analyzed across implementations on various combinations of these processors. 
The experimental results show that the heterogeneous computing system achieves performance similar to the pure-FPGA implementation, and comparable energy efficiency, while using less than 35% of the resources.
Keywords: FPGA-CPU-GPU collaboration, design space exploration, heterogeneous computing, intermediate language, parameterized instantiation
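The core of the task-partitioning idea can be sketched as a dispatcher that maps each task to the best-suited processor based on its parallelism characteristics. This is only a toy illustration of the general principle; the classification rules, task names, and thresholds below are assumptions of ours, not the paper's servant-execution-flow model or its intermediate language.

```python
# Toy sketch of heterogeneous task partitioning: each task declares its
# parallelism characteristics, and a scheduler maps it to FPGA, GPU, or
# CPU. All rules and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    data_parallelism: int      # number of independent data elements
    pipeline_friendly: bool    # deep, regular pipeline suits an FPGA

def assign_processor(task: Task) -> str:
    if task.pipeline_friendly:
        return "FPGA"          # custom deep pipelines, streaming data
    if task.data_parallelism >= 1024:
        return "GPU"           # wide data parallelism suits SIMT cores
    return "CPU"               # control-heavy or small tasks

tasks = [
    Task("contrast_stretching", data_parallelism=4096, pipeline_friendly=False),
    Task("waveform_trigger", data_parallelism=1, pipeline_friendly=True),
    Task("ui_update", data_parallelism=8, pipeline_friendly=False),
]
for t in tasks:
    print(f"{t.name} -> {assign_processor(t)}")
```

A real system would, as the abstract describes, drive such mapping decisions through instantiation parameters of an intermediate representation rather than hard-coded rules.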
Procedia PDF Downloads 121
1029 Intellectual Property Rights (IPR) in the Relations among Nations: Towards a Renewed Hegemony or Not
Authors: Raju K. Thadikkaran
Abstract:
Introduction: IPR have come to the centre stage of development discourse today for a variety of reasons, ranging from arbitrariness in enforcement, overlap and mismatch with various international agreements and conventions, and divergence in definition, nature, content, and duration, to severe adverse consequences for technologically weak developing countries. In turn, IPR have acquired prominence in foreign policy making as well as in the relations among nations. Quite naturally, there is ample scope for an examination of the correlation between technology, IPR, and international relations in the contemporary world. Nature and Scope: Even a cursory examination of the realm of IPR and its protection reveals the acute divergence that exists in perspectives on all matters related to the very definition, nature, content, scope, and duration. The proponents of stronger protection, mostly technologically advanced countries, insist on a stringent IP regime, whereas technologically weak developing countries advocate flexibilities. From the perspective of developing countries like India, one of the most crucial concerns relates to the patenting of life forms and the protection of TK (traditional knowledge) and BD (biodiversity). There have been several instances of bio-piracy and bio-prospecting of resources related to BD and TK from the bio-rich Global South. It is widely argued that many provisions in TRIPS are capable of offsetting welcome provisions in the CBD, such as Access and Benefit Sharing and Prior Informed Consent. The point being argued is how the mismatch between the provisions of the TRIPS Agreement and the CBD could be addressed in a healthy manner, so that the essential minimum legitimate interests of all stakeholders are secured, thereby introducing a new direction to international relations. 
The findings of this study reveal that the challenges raised by the TRIPS regime outweigh the opportunities. The mismatch in provisions has generated crucial issues such as bio-piracy and bio-prospecting. However, there is ample scope for managing and protecting IP through institutional innovation and legislative, executive, and administrative initiative at the global, national, and regional levels. The Indian experience is quite reflective of this, and efforts are being made through the new national IPR policy. This paper, employing the historical-analytical method, has three sections. The first section traces the correlation between technology, IPR, and international relations. The second section reviews the issues and potential concerns in the protection and management of IP related to BD and TK in developing countries in the wake of TRIPS and the CBD. The final section analyzes the Indian experience in this regard, and the experience of bio-rich Kerala in particular.
Keywords: IPR, technology and international relations, bio-diversity, traditional knowledge
Procedia PDF Downloads 378
1028 A Fourier Method for Risk Quantification and Allocation of Credit Portfolios
Authors: Xiaoyu Shen, Fang Fang, Chujun Qiu
Abstract:
Herewith we present a Fourier method for credit risk quantification and allocation in the factor-copula model framework. The key insight is that, compared to directly computing the cumulative distribution function of the portfolio loss via Monte Carlo simulation, it is in fact more efficient to calculate the transform of the distribution function in the Fourier domain; inverting back to the real domain can then be done in just one step and semi-analytically, thanks to the popular COS method (with some adjustments). We also show that the Euler risk allocation problem can be solved in the same way, since it can be transformed into the problem of evaluating a conditional cumulative distribution function. Once the conditional or unconditional cumulative distribution function is known, one can easily calculate various risk metrics. The proposed method not only fills a gap in the literature, to the best of our knowledge, of accurate numerical methods for risk allocation, but may also serve as a much faster alternative to Monte Carlo simulation for risk quantification in general. It can cope with various factor-copula model choices, which we demonstrate via examples of a two-factor Gaussian copula and a two-factor Gaussian-t hybrid copula. The fast error convergence is proved mathematically and then verified by numerical experiments, in which Value-at-Risk, Expected Shortfall, and conditional Expected Shortfall are taken as examples of commonly used risk metrics. The calculation speed and accuracy are tested to be significantly superior to Monte Carlo simulation for real-sized portfolios. The computational complexity is, by design, driven primarily by the number of factors rather than the number of obligors, as is the case for Monte Carlo simulation. 
The limitation of this method lies in the "curse of dimensionality" intrinsic to multi-dimensional numerical integration, which, however, can be relaxed with the help of dimension-reduction techniques and/or parallel computing, as we will demonstrate in a separate paper. The potential applications of this method span a wide range: from credit derivatives pricing to economic capital calculation for the banking book, default risk charge and incremental risk charge computation for the trading book, and even risk types other than credit risk.
Keywords: credit portfolio, risk allocation, factor copula model, the COS method, Fourier method
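The building block that the abstract relies on can be illustrated in a few lines: the COS method recovers a density from its characteristic function by a Fourier-cosine expansion on a truncated interval [a, b]. The sketch below shows this for a standard normal, whose characteristic function is known in closed form; the paper's actual application would replace phi with the (conditional) characteristic function of the portfolio loss, and the interval, term count, and test distribution here are our own choices.

```python
# Hedged sketch of COS density recovery from a characteristic function.
# Demonstrated on a standard normal; parameters a, b, N are illustrative.

import math
import cmath

def cos_density(phi, x, a=-10.0, b=10.0, N=128):
    """COS approximation of a density at point x, given its characteristic
    function phi, on the truncation interval [a, b] with N cosine terms."""
    total = 0.0
    for k in range(N):
        u = k * math.pi / (b - a)
        # Cosine coefficient F_k recovered from the characteristic function.
        F_k = 2.0 / (b - a) * (phi(u) * cmath.exp(-1j * u * a)).real
        if k == 0:
            F_k *= 0.5          # the k = 0 term carries weight 1/2
        total += F_k * math.cos(u * (x - a))
    return total

# Standard normal: characteristic function exp(-u^2 / 2).
phi_normal = lambda u: math.exp(-0.5 * u * u)
approx = cos_density(phi_normal, 0.0)
exact = 1.0 / math.sqrt(2.0 * math.pi)
print(f"COS density at 0: {approx:.8f} (exact {exact:.8f})")
```

The exponential error convergence visible in this toy case is the property that makes the approach competitive with Monte Carlo simulation for smooth loss distributions.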
Procedia PDF Downloads 171
1027 Using Daily Light Integral Concept to Construct the Ecological Plant Design Strategy of Urban Landscape
Authors: Chuang-Hung Lin, Cheng-Yuan Hsu, Jia-Yan Lin
Abstract:
Adopting a greenery approach on architectural sites is an indispensable strategy for improving ecological habitats, decreasing the heat-island effect, purifying air quality, and relieving surface runoff as well as noise pollution, all in an attempt to achieve a sustainable environment. What plant design can attain in terms of visual quality and ideal carbon dioxide fixation depends on whether we can make appropriate use of greenery according to the nature of the architectural site. To achieve this goal, architects and landscape architects need to be provided with sufficient local references. Current greenery studies focus mainly on the urban heat-island effect at large scale, and most architects still rely on people with years of expertise for the selection and disposition of plantings at the microclimate scale. Therefore, environmental design, which integrates science and aesthetics, requires fundamental research on landscape environment technology as distinct from building environment technology. By doing so, we can create mutual benefits between green buildings and the environment. This issue is extremely important for the greening design of green building sites in cities and various open spaces. The purpose of this study is to establish plant selection and allocation strategies under different levels of building shading. Initially, taking the shading of sunshine on the greening sites as the starting point, the effects of the shade produced by different building types on greening strategies were analyzed. Then, by measuring PAR (photosynthetically active radiation), the relative DLI (daily light integral) was calculated and a DLI map was established in order to evaluate the effects of building shading on the established environmental greening, thereby serving as a reference for plant selection and allocation. 
The results are to be applied in evaluating the environmental greening of green buildings and in establishing a "right plant, right place" design strategy of multi-level ecological greening for application in urban design and landscape development, as well as greening criteria to feed back into eco-city green buildings.
Keywords: daily light integral, plant design, urban open space
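The PAR-to-DLI conversion underlying the mapping step is a standard unit conversion: DLI (mol·m⁻²·day⁻¹) equals the average PAR photon flux (µmol·m⁻²·s⁻¹) times the number of daylight seconds, divided by 10⁶. A minimal sketch, where the site names, PAR readings, and photoperiod are illustrative assumptions rather than the study's measurements:

```python
# Back-of-envelope DLI calculation from an average PAR photon flux.
# DLI (mol m^-2 day^-1) = PPFD (umol m^-2 s^-1) * daylight seconds / 1e6.
# All readings below are hypothetical.

def daily_light_integral(avg_ppfd_umol, daylight_hours):
    """Convert an average PAR flux and photoperiod into a DLI value."""
    return avg_ppfd_umol * daylight_hours * 3600 / 1_000_000

# Hypothetical measurement points under different building shading levels.
sites = {
    "unshaded_roof": (900.0, 12.0),          # (umol m^-2 s^-1, hours)
    "partially_shaded_court": (350.0, 12.0),
    "deep_building_shade": (80.0, 12.0),
}
for name, (ppfd, hours) in sites.items():
    dli = daily_light_integral(ppfd, hours)
    print(f"{name}: DLI = {dli:.1f} mol/m^2/day")
```

Mapping such DLI values across a site is what allows matching each zone to species whose light requirements it can actually satisfy, i.e. the "right plant, right place" strategy.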
Procedia PDF Downloads 514