Search results for: regulatory mechanism
1331 Deep Vision: A Robust Dominant Colour Extraction Framework for T-Shirts Based on Semantic Segmentation
Authors: Kishore Kumar R., Kaustav Sengupta, Shalini Sood Sehgal, Poornima Santhanam
Abstract:
Fashion is a human expression that is constantly changing. One of the prime factors that consistently influences fashion is the change in colour preferences. The role of colour in our everyday lives is very significant: it subconsciously reveals a lot about one's mindset and mood. Analyzing the colours extracted from outfit images is therefore an important way to examine individual and consumer behaviour. Several research works have been carried out on extracting colours from images, but to the best of our knowledge, no studies have extracted colours from a specific apparel item and identified colour patterns geographically. This paper proposes a framework for accurately extracting colours from T-shirt images and predicting dominant colours geographically. The proposed method consists of two stages: first, a U-Net deep learning model is adopted to segment the T-shirts from the images; second, the colours are extracted only from the T-shirt segments. The proposed method employs the iMaterialist (Fashion) 2019 dataset for the semantic segmentation task. The proposed framework also includes a mechanism for gathering data and analyzing India's general colour preferences. From this research, it was observed that black and grey are the dominant colours in different regions of India. The proposed method can be adapted to study fashion's evolving colour preferences.
Keywords: colour analysis in t-shirts, convolutional neural network, encoder-decoder, k-means clustering, semantic segmentation, U-Net model
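The keywords indicate that the dominant colours are obtained by k-means clustering of the pixels inside the U-Net T-shirt mask. As a rough illustration of that second stage only, the sketch below runs a minimal pure-Python k-means over RGB pixel tuples and returns the cluster centres ordered by cluster size; the pixel data, `k` value, and iteration count are illustrative assumptions, not values from the paper.

```python
import random

def kmeans_colours(pixels, k=3, iters=20, seed=0):
    """Cluster RGB pixel tuples; return centroids sorted by cluster size (dominant first)."""
    random.seed(seed)
    centroids = random.sample(pixels, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assign each pixel to its nearest centroid (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in pixels:
            i = min(range(k),
                    key=lambda c: sum((p[d] - centroids[c][d]) ** 2 for d in range(3)))
            clusters[i].append(p)
        # Recompute centroids as cluster means; keep old centroid for empty clusters.
        centroids = [
            tuple(sum(p[d] for p in cl) / len(cl) for d in range(3)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    order = sorted(range(k), key=lambda i: -len(clusters[i]))
    return [centroids[i] for i in order]

# Toy masked-pixel set: a mostly-black T-shirt with a grey logo.
pixels = [(10, 10, 10)] * 80 + [(128, 128, 128)] * 20
dominant = kmeans_colours(pixels, k=2)
```

In a real pipeline the pixel list would come from the U-Net segmentation mask rather than being hand-built.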
Procedia PDF Downloads 111
1330 The Mechanism of Upgrading and Urban Development in the Egyptian City: Case Study of Damietta
Authors: Lina Fayed Amin
Abstract:
The research begins by reviewing related urban concepts such as the urban fabric, development, and urban development, and also addresses upgrading, urban upgrading, community participation, and the role of local administration in development and upgrading projects. It then studies regional upgrading and urban development projects in Egypt, followed by international projects, and analyses the strategies followed in these projects. Afterwards, the regional aspects of both Damietta governorate and Damietta city are presented, covering their potentials and development constraints. This is followed by a study of the upgrading and urban development strategies in relation to the city's crucial problems and the constraints that faced the upgrading and development project, and by an examination of how the project's strategies were implemented and how the financial resources needed for the development project in Damietta city were provided. The urban and human development projects involved in upgrading Damietta city are then studied, and the different projects and their results are analysed against the city's needs. The research then compares the upgrading and urban development project in Damietta with the regional upgrading and development projects in Egypt, as well as with international projects in some Arab and foreign countries, in relation to goals, problems, obstacles, community participation, financial resources, and results. Finally, it reviews the results and recommendations reached from studying similar urban upgrading projects in Egypt and in some Arab and foreign countries, followed by an analytical discussion of upgrading and urban development in Egypt.
Keywords: Damietta city, urban development, upgrading mechanisms, urban upgrading
Procedia PDF Downloads 425
1329 Removal of Gaseous Pollutant from the Flue Gas in a Submerged Self-Priming Venturi Scrubber
Authors: Manisha Bal, B. C. Meikap
Abstract:
Hydrogen chloride is the most common acid gas emitted by industries. HCl gas is listed as a Title III hazardous air pollutant and poses a severe threat to human health as well as the environment, so its removal from flue gases is imperative. In the present study, a submerged self-priming venturi scrubber was chosen to remove HCl gas with water as the scrubbing liquid. The venturi scrubber is the most popular device for the removal of gaseous pollutants. Its main mechanism is that the polluted gas stream enters at the converging section and is accelerated to maximum velocity at the throat section. In the submerged configuration, the venturi scrubber is immersed in the liquid tank, and liquid enters at the throat section through the suction created by the large pressure drop generated there. The maximized throat gas velocity atomizes the entering liquid into numerous tiny droplets, and gaseous HCl is absorbed from the gas into the liquid droplets inside the venturi scrubber owing to the interaction between the gas and water. Experiments were conducted at different throat gas velocities, water levels, and inlet concentrations of HCl to enhance the HCl removal efficiency, and the effects of these parameters on the removal efficiency were evaluated. The present system yielded very high removal efficiencies for the scrubbing of HCl gas, exceeding 90%. It is also concluded that the removal efficiency of HCl increases with increasing throat gas velocity, inlet HCl concentration, and water level height.
Keywords: air pollution, HCl scrubbing, mass transfer, self-priming venturi scrubber
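The removal efficiency quoted above follows the usual definition based on inlet and outlet pollutant concentrations. A minimal sketch (the concentration values are illustrative, not measurements from the study):

```python
def removal_efficiency(c_in, c_out):
    """Percent removal of a gaseous pollutant across the scrubber:
    eta = (C_in - C_out) / C_in * 100."""
    if c_in <= 0:
        raise ValueError("inlet concentration must be positive")
    return (c_in - c_out) / c_in * 100.0

# Illustrative HCl concentrations (ppm) at the scrubber inlet and outlet.
eta = removal_efficiency(500.0, 40.0)
```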
Procedia PDF Downloads 142
1328 Microstructural Characterization of Creep Damage Evolution in Welded Inconel 600 Superalloy
Authors: Lourdes Yareth Herrera-Chavez, Alberto Ruiz, Victor H. Lopez
Abstract:
Superalloys are used in components that operate at high temperatures, such as pressure vessels and heat exchanger tubing. Design standards for these components must consider creep resistance among other criteria. Fusion welding processes are commonly used in industry to join such components and commonly generate three distinctive zones: the heat affected zone (HAZ), the weld metal (WM), and the base metal (BM). In nickel-based superalloys, the microstructure developed during fusion welding dictates the mechanical response of the welded component, and it is very important to establish these effects on the mechanical response of the component. In this work, two plates of Inconel 600 superalloy were joined by Gas Metal Arc Welding (GMAW). Creep samples were cut and milled to specification and creep tested at 650 °C using stress levels of 350, 300, 275, 250 and 200 MPa. Microstructural analysis showed a progressive creep damage evolution that depends on the stress level, with a preferential accumulation of creep damage at the heat affected zone, where creep rupture preferentially occurs owing to an austenitic matrix with grain boundary precipitates of the type Cr23C6. The fractured surfaces showed dimple patterns of cavities and voids. The results indicate that the damage mechanism is cavity growth by the combined effect of power-law and diffusion creep.
Keywords: austenitic microstructure, creep damage evolution, heat affected zone, vickers microhardness
Procedia PDF Downloads 203
1327 Effects of Low Sleep Efficiency and Sleep Deprivation on Driver Physical Fatigue
Authors: Chen-Yu Tsai, Wen-Te Liu, Chen-Chen Lo, Kang Lo, Yin-Tzu Lin
Abstract:
Background: Driving drowsiness related to insufficient or disordered sleep accounts for a major percentage of vehicular accidents. Sleep deprivation is the primary cause of low sleep efficiency; nevertheless, the mechanism by which sleep deprivation induces driving fatigue remains unclear. Objective: The objective of this study is to examine the relationship between insufficient sleep efficiency and driving fatigue. Methodologies: The physical condition while driving was obtained from questionnaires to classify the state of driving fatigue. Sleep efficiency was quantified by polysomnography (PSG), and the sleep stages were scored by a registered technologist during examination in a hospital in New Taipei City (Taiwan). The independent t-test was used to investigate the correlation between sleep efficiency, sleep stage ratios, and driving drowsiness. Results: There were 880 subjects recruited in this study, all of whom had undergone polysomnography to evaluate the severity of obstructive sleep apnea syndrome (OSAS) and had completed the driver condition questionnaire. Four hundred eighty-four subjects (55%) were classified as the fatigue group, and 396 subjects (45%) served as the control group. The ratio of stage three sleep (N3) in the fatigue group (0.032 ± 0.056) was significantly lower than in the control group (p < 0.01). A significantly higher snoring index (242.14 ± 205.51 /hour) was observed in the fatigue group (p < 0.01). Conclusion: We observe a considerable correlation between reduced deep sleep and driving drowsiness. To avoid drowsy driving, sleep deprivation and snoring events during sleep should be monitored and alleviated.
Keywords: driving drowsiness, sleep deprivation, stage three sleep, snoring index
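The group comparison above relies on the independent t-test. As a sketch of the underlying statistic, assuming the classic pooled-variance (Student's) form rather than Welch's variant, and using made-up sample values rather than study data:

```python
import math

def independent_t(a, b):
    """Pooled-variance Student's t-statistic for two independent samples."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    # Unbiased sample variances of each group.
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    # Pooled variance with na + nb - 2 degrees of freedom.
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Toy N3-ratio samples for a "fatigue" and a "control" group.
t = independent_t([1.0, 2.0, 3.0], [2.0, 3.0, 4.0])
```

In practice the p-value would then be read from the t-distribution with the pooled degrees of freedom.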
Procedia PDF Downloads 146
1326 Patient’s Knowledge and Use of Sublingual Glyceryl Trinitrate Therapy in Taiping Hospital, Malaysia
Authors: Wan Azuati Wan Omar, Selva Rani John Jasudass, Siti Rohaiza Md. Saad
Abstract:
Introduction & objective: The objectives of this study were to assess patients' knowledge of appropriate sublingual glyceryl trinitrate (GTN) use and to investigate how patients commonly store and carry their sublingual GTN tablets. Methodology: This was a cross-sectional survey using a validated researcher-administered questionnaire. The study involved cardiac patients receiving sublingual GTN attending the outpatient and inpatient departments of Taiping Hospital, a non-academic public care hospital. The minimum calculated sample size was 92, but 100 patients were conveniently sampled. Respondents were interviewed in 3 areas: demographic data, knowledge, and use of sublingual GTN. Eight items were used to calculate each subject's knowledge score and six items were used to calculate the use score. Results: Of the 96 patients who consented to participate, the majority (96.9%) were well aware of the indication of sublingual GTN. With regard to the mechanism of action of sublingual GTN, 73 (76%) patients did not know how the medication works. The majority of patients (66.7%) knew about the proper storage of the tablets. Concerning the maximum number of sublingual GTN tablets that can be taken during each angina episode, 36.5% did not know that up to 3 tablets can be taken. Fifty-four (56.2%) patients were not aware that they need to replace sublingual GTN every 8 weeks after receiving the tablets. The majority (69.8%) of patients demonstrated a lack of knowledge regarding the use of sublingual GTN for the prevention of chest pain. Conclusion: Overall, patients' knowledge regarding the self-administration of sublingual GTN is still inadequate. The findings support the need for more frequent reinforcement of patient education, especially in the areas of preventive use, storage, and drug stability.
Keywords: glyceryl trinitrate, knowledge, adherence, patient education
Procedia PDF Downloads 398
1325 Estimation of Bio-Kinetic Coefficients for Treatment of Brewery Wastewater
Authors: Abimbola M. Enitan, J. Adeyemo
Abstract:
Anaerobic modeling is a useful tool to describe and simulate the condition and behaviour of anaerobic treatment units for better effluent quality and biogas generation. The present investigation deals with the anaerobic treatment of brewery wastewater at varying organic loads. The chemical oxygen demand (COD) and total suspended solids (TSS) of the influent and effluent of the bioreactor were determined at various retention times to generate data for the kinetic coefficients. The bio-kinetic coefficients in the modified Stover-Kincannon kinetic and methane generation models were determined to study the performance of the anaerobic digestion process. At steady state, the kinetic coefficient (K), the endogenous decay coefficient (Kd), the maximum growth rate of microorganisms (µmax), the growth yield coefficient (Y), the ultimate methane yield (Bo), the maximum utilization rate constant (Umax), and the saturation constant (KB) in the models were calculated to be 0.046 g/g COD, 0.083 d⁻¹, 0.117 d⁻¹, 0.357 g/g, 0.516 L CH4/g CODadded, 18.51 g/L/day and 13.64 g/L/day, respectively. The outcome of this study will help in the simulation of anaerobic models to predict usable methane and good effluent quality during the treatment of industrial wastewater. This will protect the environment, conserve natural resources, and save the time and cost incurred by industries for the discharge of untreated or partially treated wastewater. It will also contribute to a sustainable long-term clean development mechanism for optimizing the methane produced from anaerobic degradation of waste in a closed system.
Keywords: brewery wastewater, methane generation model, environment, anaerobic modeling
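For the modified Stover-Kincannon model, Umax and KB are conventionally obtained from a linearised plot of the inverse substrate removal rate against the inverse organic loading rate, V/(Q·(S0−Se)) = (KB/Umax)·V/(Q·S0) + 1/Umax. A minimal sketch of that fitting step, using synthetic values rather than the study's measurements:

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def stover_kincannon(q, v, s0_list, se_list):
    """Return (Umax, KB) from the linearisation
    V/(Q*(S0-Se)) = (KB/Umax) * V/(Q*S0) + 1/Umax,
    given flow rate q, reactor volume v, and influent/effluent COD lists."""
    x = [v / (q * s0) for s0 in s0_list]                      # inverse loading rate
    y = [v / (q * (s0 - se)) for s0, se in zip(s0_list, se_list)]  # inverse removal rate
    slope, intercept = linear_fit(x, y)
    umax = 1.0 / intercept
    kb = slope * umax
    return umax, kb
```

A plot of `y` against `x` should be a straight line if the model holds; the intercept gives 1/Umax and the slope gives KB/Umax.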
Procedia PDF Downloads 270
1324 Energy Consumption Statistic of Gas-Solid Fluidized Beds through Computational Fluid Dynamics-Discrete Element Method Simulations
Authors: Lei Bi, Yunpeng Jiao, Chunjiang Liu, Jianhua Chen, Wei Ge
Abstract:
Two energy paths are proposed from a thermodynamic viewpoint. Energy consumption means the total power input to the specific system, and it can be decomposed into energy retention and energy dissipation. Energy retention is the variation of accumulated mechanical energy in the system, and energy dissipation is the energy converted to heat by irreversible processes. Based on the Computational Fluid Dynamics-Discrete Element Method (CFD-DEM) framework, the different energy terms are quantified from the specific flow elements of fluid cells and particles as well as their interactions with the wall. Direct energy consumption statistics are carried out for both cold and hot flow in gas-solid fluidization systems. To clarify the statistical method, it is necessary to identify which system is studied: the particle-fluid system or the particle sub-system. For the cold flow, the total energy consumption of the particle sub-system can predict the onset of bubbling and turbulent fluidization, while the trends of local energy consumption reflect the dynamic evolution of mesoscale structures. For the hot flow, different heat transfer mechanisms are analyzed, and the original solver is modified to reproduce the experimental results. The influence of the heat transfer mechanisms and heat source on energy consumption is also investigated. The proposed statistical method has proven to be energy-conservative and easy to conduct, and it is expected to be applicable to other multiphase flow systems.
Keywords: energy consumption statistic, gas-solid fluidization, CFD-DEM, regime transition, heat transfer mechanism
Procedia PDF Downloads 68
1323 Strategic Asset Allocation Optimization: Enhancing Portfolio Performance Through PCA-Driven Multi-Objective Modeling
Authors: Ghita Benayad
Abstract:
Asset allocation, which affects the long-term profitability of portfolios by distributing assets to fulfill a range of investment objectives, is the cornerstone of investment management in the dynamic and complicated world of financial markets. This paper offers a technique for optimizing strategic asset allocation with the goal of improving portfolio performance, addressing the inherent complexity and uncertainty of the market through the use of Principal Component Analysis (PCA) in a multi-objective modeling framework. The study's first section starts with a critical evaluation of conventional asset allocation techniques, highlighting how poorly they capture the intricate relationships between assets and the volatile nature of the market. In order to overcome these challenges, the project suggests a PCA-driven methodology that isolates the important characteristics influencing asset returns by decreasing the dimensionality of the investment universe. This reduction provides a stronger basis for asset allocation decisions by facilitating a clearer understanding of market structures and behaviors. Using a multi-objective optimization model, the project builds on this foundation by taking into account a number of performance metrics at once, including risk minimization, return maximization, and the accomplishment of predetermined investment goals like regulatory compliance or sustainability standards. This model provides a more comprehensive picture of investor preferences and portfolio performance than conventional single-objective optimization techniques. The PCA-driven multi-objective optimization model is then applied to historical market data with the aim of constructing portfolios that perform better under different market conditions.
Compared to portfolios produced by conventional asset allocation methodologies, the results show that portfolios optimized using the proposed method display improved risk-adjusted returns, greater resilience to market downturns, and better alignment with specified investment objectives. The study also examines the implications of this PCA technique for portfolio management, including the prospect that it might give investors a more advanced framework for navigating financial markets. The findings suggest that by combining PCA with multi-objective optimization, investors may obtain a more strategic and informed asset allocation that is responsive to both market conditions and individual investment preferences. In conclusion, this capstone project advances the field of financial engineering by creating a sophisticated asset allocation optimization model that integrates PCA with multi-objective optimization. In addition to raising questions about the current state of asset allocation, the proposed method of portfolio management opens up new avenues for research and application in the area of investment techniques.
Keywords: asset allocation, portfolio optimization, principal component analysis, multi-objective modelling, financial market
Procedia PDF Downloads 47
1322 Estimation of Maize Yield by Using a Process-Based Model and Remote Sensing Data in the Northeast China Plain
Authors: Jia Zhang, Fengmei Yao, Yanjing Tan
Abstract:
The accurate estimation of crop yield is of great importance for food security. In this study, a process-based mechanistic model was modified to estimate the yield of a C4 crop by modifying the carbon metabolic pathway in the photosynthesis sub-module of the RS-P-YEC (Remote-Sensing-Photosynthesis-Yield estimation for Crops) model. The yield was calculated by multiplying net primary productivity (NPP) by the harvest index (HI), derived from the ratio of grain to stalk yield. The modified RS-P-YEC model was used to simulate maize yield in the Northeast China Plain during the period 2002-2011. Statistical maize yield data from the study area were used to validate the simulated results at the county level. The results showed that the Pearson correlation coefficient (R) between the simulated yield and the statistical data was 0.827 (P < 0.01), and the root mean square error (RMSE) was 712 kg/ha with a relative error (RE) of 9.3%. From 2002-2011, the yield of the maize planting zone in the Northeast China Plain was increasing, with a smaller coefficient of variation (CV). The spatial pattern of simulated maize yield was consistent with the actual distribution in the Northeast China Plain, with an increasing trend from the northeast to the southwest. Hence, the results demonstrate that the modified process-based model coupled with remote sensing data is suitable for maize yield prediction in the Northeast China Plain at the regional spatial scale.
Keywords: process-based model, C4 crop, maize yield, remote sensing, Northeast China Plain
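The yield and validation computations described above (yield = NPP × HI, with RMSE against county-level statistics) can be sketched as follows. The relative-error definition as RMSE over the mean observed yield is an assumption, and all numbers are illustrative:

```python
import math

def maize_yield(npp, harvest_index):
    """Yield as net primary productivity times harvest index."""
    return npp * harvest_index

def rmse(simulated, observed):
    """Root mean square error between simulated and observed yields."""
    n = len(simulated)
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(simulated, observed)) / n)

def relative_error(simulated, observed):
    """RMSE as a percentage of the mean observed yield (assumed definition)."""
    return rmse(simulated, observed) / (sum(observed) / len(observed)) * 100.0

# Illustrative: NPP of 10000 kg/ha and HI of 0.5 give a 5000 kg/ha yield.
y = maize_yield(10000.0, 0.5)
```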
Procedia PDF Downloads 376
1321 A Study of the Challenges in Adoption of Renewable Energy in Nigeria
Authors: Farouq Sule Garo, Yahaya Yusuf
Abstract:
The purpose of this study is to investigate why there is a general lack of successful adoption of sustainable energy in Nigeria. This is particularly important given the current global campaign for net-zero emissions. At the 26th United Nations Conference of the Parties (COP26), hosted by the UK in Glasgow in 2021, countries including Nigeria agreed, amongst other things, to a zero-emissions pact. There is, therefore, an obligation on the part of Nigeria to transition from a fossil fuel-based economy to a sustainable net-zero emissions economy. The adoption of renewable energy is fundamental to achieving this ambitious target if decarbonisation of economic activities is to become a reality. Nigeria has an abundance of renewable energy sources, and yet uptake has been poor; where attempts have been made to develop and harness renewable energy resources, there has been limited success. It is not entirely clear why this is the case. When analysts cite corruption as the reason for the failure to adopt renewable energy or implement projects, it is arguable that corruption alone cannot explain the situation. There is therefore a need for a thorough investigation into the underlying issues surrounding the poor uptake of renewable energy in Nigeria. This pilot study, drawing upon stakeholder theory, adopts a multi-stakeholder perspective to investigate the influence and impact of economic, political, technological, and social factors on the adoption of renewable energy in Nigeria. The research will also investigate how these factors shape (or fail to shape) strategies for achieving successful adoption of renewable energy in the country. A qualitative research methodology has been adopted, given that the research requires in-depth studies in specific settings rather than a general population survey. A number of interviews will be conducted, each allowing thorough probing of sources.
Six interviews have already been conducted, primarily focused on the economic dimensions of the challenges in adopting renewable energy. The six participants in these initial interviews were all connected to the Katsina Wind Farm Project, which was conceived and built with a view to diversifying Nigeria's energy mix and capitalising on the vast wind energy resources in the northern region. The findings from the six interviews provide insights into how economic factors impact the wind farm project. Some key drivers have been identified, including strong governmental support and the recognition of the need for energy diversification. These drivers played crucial roles in initiating and advancing the Katsina Wind Farm Project. In addition, the initial analysis has highlighted various challenges encountered during the project's implementation, including financial, regulatory, and environmental aspects. These challenges provide valuable lessons that can inform strategies to mitigate risks and improve future wind energy projects.
Keywords: challenges in adoption of renewable energy, economic factors, net-zero emission, political factors
Procedia PDF Downloads 41
1320 X-Ray Detector Technology Optimization in Computed Tomography
Authors: Aziz Ikhlef
Abstract:
Most multi-slice Computed Tomography (CT) scanners are built with detectors composed of scintillator-photodiode arrays. The photodiode arrays are mainly based on front-illuminated technology for detectors under 64 slices and on back-illuminated photodiodes for systems of 64 slices or more. Designs based on back-illuminated photodiodes were investigated for CT machines to overcome the challenge of the higher number of routing runs and connections required by front-illuminated diodes. In back-lit diodes, the electronic noise is improved because of the reduction in load capacitance due to the reduced routing. This translates into better image quality in low-signal applications, improving low-dose imaging in a large patient population. With the fast development of multi-detector-row CT (MDCT) scanners and the increasing number of examinations, both the medical and regulatory communities have raised significant concerns about the radiation dose received by the patient. In order to reduce individual exposure, and in response to the recommendations of the International Commission on Radiological Protection (ICRP), which suggests that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions to optimize dose efficiency and image quality based on x-ray emission and scanning parameters. Added demands on CT detector performance also come from the increased utilization of spectral CT, or dual-energy CT, in which projection data at two different tube potentials are collected. One approach utilizes a technology called fast-kVp switching, in which the tube voltage is switched between 80 kVp and 140 kVp in a fraction of a millisecond. To reduce the cross-contamination of signals, the scintillator-based detector's temporal response has to be extremely fast to minimize the residual signal from previous samples.
In addition, this paper will present an overview of detector technologies and image chain improvements that have been investigated in the last few years to improve the signal-to-noise ratio and the dose efficiency of CT scanners in regular examinations and in energy discrimination techniques. Several parameters of the image chain in general, and of the detector technology in particular, contribute to the optimization of the final image quality. We will go through the properties of the post-patient collimation to improve the scatter-to-primary ratio; the scintillator material properties such as light output, afterglow, primary speed, and crosstalk to improve spectral imaging; the photodiode design characteristics; and the data acquisition system (DAS), optimized for crosstalk, noise, and temporal/spatial resolution.
Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts
Procedia PDF Downloads 194
1319 Study of Aqueous Solutions: A Dielectric Spectroscopy Approach
Authors: Kumbharkhane Ashok
Abstract:
Time domain dielectric relaxation spectroscopy (TDRS) probes the interaction of a macroscopic sample with a time-dependent electrical field. The resulting complex permittivity spectrum characterizes the amplitude (voltage) and time scale of the charge-density fluctuations within the sample. These fluctuations may arise from the reorientation of the permanent dipole moments of individual molecules or from the rotation of dipolar moieties in flexible molecules, like polymers. The time scale of these fluctuations depends on the sample and its relaxation mechanism. Relaxation times range from a few picoseconds in low-viscosity liquids to hours in glasses; therefore, the DRS technique covers an extensive range of dynamical processes, corresponding to frequencies from 10⁻⁴ Hz to 10¹² Hz. This inherent ability to monitor the cooperative motion of a molecular ensemble distinguishes dielectric relaxation from methods like NMR or Raman spectroscopy, which yield information on the motions of individual molecules. An experimental setup for the Time Domain Reflectometry (TDR) technique from 10 MHz to 30 GHz has been developed for aqueous solutions. This technique is very simple and covers a wide band of frequencies in a single measurement. Dielectric relaxation spectroscopy is especially sensitive to intermolecular interactions. The complex permittivity spectra of aqueous solutions have been fitted using the Cole-Davidson (CD) model to determine static dielectric constants and relaxation times over the entire concentration range. The heterogeneous molecular interactions in aqueous solutions are discussed through the Kirkwood correlation factor and excess properties.
Keywords: liquid, aqueous solutions, time domain reflectometry
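The Cole-Davidson model mentioned above gives the complex permittivity as ε*(ω) = ε∞ + (εs − ε∞)/(1 + iωτ)^β, which reduces to the Debye model for β = 1. A minimal sketch (the water-like parameter values are illustrative assumptions, not the study's fitted values):

```python
def cole_davidson(omega, eps_s, eps_inf, tau, beta):
    """Complex permittivity from the Cole-Davidson relaxation model:
    eps*(omega) = eps_inf + (eps_s - eps_inf) / (1 + 1j*omega*tau)**beta."""
    return eps_inf + (eps_s - eps_inf) / (1 + 1j * omega * tau) ** beta

# Illustrative water-like parameters: eps_s = 78.4, eps_inf = 5.2, tau = 8.3 ps.
# Evaluated at omega = 1/tau with beta = 1 (Debye limit).
eps = cole_davidson(omega=1.0 / 8.3e-12, eps_s=78.4, eps_inf=5.2,
                    tau=8.3e-12, beta=1.0)
```

Fitting the model to measured spectra then amounts to adjusting eps_s, eps_inf, tau, and beta until the computed curve matches the data; at omega = 0 the expression returns the static dielectric constant eps_s.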
Procedia PDF Downloads 444
1318 A Discrete Element Method Centrifuge Model of Monopile under Cyclic Lateral Loads
Authors: Nuo Duan, Yi Pik Cheng
Abstract:
This paper presents data from a series of two-dimensional Discrete Element Method (DEM) simulations of a large-diameter rigid monopile subjected to cyclic loading under a high gravitational force. At present, monopile foundations are widely used to support tall and heavy wind turbines, which are subjected to significant loads from wind and wave actions. A safe design must address issues such as rotations and changes in soil stiffness under these loading conditions. Design guidance on the issue is limited, as is the availability of laboratory and field test data. The interpretation of these results in sand, such as the relation between loading and displacement, relies mainly on empirical correlations to pile properties. Regarding numerical models, most available data come from the Finite Element Method (FEM); they are not comprehensive, and most FEM results are sensitive to input parameters. The micro-scale behaviour could change the mechanism of the soil-structure interaction. A DEM model was used in this paper to study the behaviour under cyclic lateral loads. A non-dimensional framework is presented and applied to interpret the simulation results. The DEM data compare well with various sets of published experimental centrifuge model test data in terms of lateral deflection. The accumulated permanent lateral pile displacements induced by the cyclic lateral loads were found to depend on the characteristics of the applied cyclic load, such as the loading magnitudes and directions.
Keywords: cyclic loading, DEM, numerical modelling, sands
Procedia PDF Downloads 321
1317 ZnO / TiO2 Nanoparticles for Degradation of Cyanide Ion
Authors: Masoumeh Tabatabaee, Zahra Shahryarzadeh, Masoud R. Shishebor
Abstract:
Advanced oxidation processes (AOPs) are alternative methods for the complete degradation of many organic pollutants. When a photocatalyst absorbs radiation whose energy hν > Eg, an electron from its filled valence band (VB) is promoted to its conduction band (CB), and valence band holes h+ are formed. The electron can reduce any available species, including O2, while water and hydroxide ions react to form hydroxyl radicals. ZnO and TiO2 are important photocatalysts with high catalytic activity that have attracted much research attention. TiO2 can only absorb a small portion of the solar spectrum, in the UV region, and many methods, such as dye sensitization, doping with other metals, and coupling TiO2 with another semiconductor, have been used to improve the photocatalytic activity of TiO2 under solar irradiation. Studies have shown that the use of metal oxides or sulfides such as WO3, MoO3, SiO2, MgO, ZnO, and CdS with TiO2 can significantly enhance its photocatalytic activity. Owing to the similarity of its photodegradation mechanism to that of TiO2, ZnO is a suitable semiconductor for use with TiO2, and recently nanosized bicomponent TiO2-ZnO photocatalysts have been prepared and used for the degradation of some pollutants. In this study, a nano-sized ZnO/TiO2 composite was synthesized. Fourier transform infrared spectroscopy (FTIR), X-ray diffraction (XRD) and scanning electron microscopy (SEM) were used to characterize its structure and morphology. The effect of the prepared ZnO/TiO2 on the photocatalytic degradation of cyanide ion under UV was investigated, along with the effects of various parameters such as ZnO/TiO2 concentration, amount of photocatalyst, amount of H2O2, initial cyanide ion concentration, pH, and irradiation time. Results show that more than 95% of 4 mg L-1 cyanide ion degraded after a 60-min reaction time under UV irradiation.
Keywords: photodegradation, ZnO/TiO2, nanoparticle, cyanide ion
Procedia PDF Downloads 395
1316 Uncovering Anti-Hypertensive Obesity Targets and Mechanisms of Metformin, an Anti-Diabetic Medication
Authors: Lu Yang, Keng Po Lai
Abstract:
Metformin, a well-known clinical drug against diabetes, has shown potential anti-diabetic and anti-obesity benefits, as reported in a growing body of evidence. However, current clinical and experimental investigations have not revealed the detailed mechanisms behind metformin's anti-obesity and anti-hypertensive actions. We have previously used bioinformatics strategies, including network pharmacology and molecular docking methodology, to uncover the key targets and pathways of bioactive compounds against clinical disorders such as cancers and coronavirus disease. Thus, in this report, an in-silico approach was utilized to identify the hub targets, pharmacological functions, and mechanism of metformin against obesity and hypertension. The network analysis identified 154 differentially expressed genes of obesity and hypertension, 21 intersection genes, and 6 hub genes of metformin treating hypertensive obesity. The molecular docking findings indicated the potent binding capability of metformin with the key proteins, including interleukin 6 (IL-6) and chemokine (C-C motif) ligand 2 (CCL2), in hypertensive obesity. The anti-hypertensive obesity action exerted by metformin involved metabolic regulation and the inflammatory reaction, and its mechanisms were revealed to include the regulation of inflammatory and immunological signaling pathways for metabolic homeostasis in tissue and the amelioration of the blood-pressure microenvironment. In conclusion, our bioinformatics findings have demonstrated the detailed hub and pharmacological targets, biological functions, and signaling pathways of metformin in treating hypertensive obesity.
Keywords: metformin, obesity, hypertension, bioinformatics findings
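The screening pipeline described above (disease genes intersected with drug targets, then hub selection by connectivity) can be sketched with plain set operations. The gene sets and interaction edges below are hypothetical placeholders for illustration, not the study's actual data.

```python
from collections import Counter

# Hypothetical placeholder gene sets -- NOT the study's data
disease_genes = {"IL6", "CCL2", "TNF", "LEP", "ADIPOQ", "AGT", "NOS3", "PPARG"}
drug_targets = {"IL6", "CCL2", "PRKAA1", "PPARG", "NOS3", "SLC22A1"}

# Step 1: intersection genes shared by the disease signature and the drug
intersection = disease_genes & drug_targets

# Step 2: toy protein-protein interaction edges among the shared genes
ppi_edges = [("IL6", "CCL2"), ("IL6", "PPARG"), ("IL6", "NOS3"), ("CCL2", "PPARG")]

# Step 3: rank the shared genes by degree; the best-connected are the hubs
degree = Counter(g for edge in ppi_edges for g in edge)
hubs = sorted(intersection, key=lambda g: (-degree[g], g))
print(hubs)  # ['IL6', 'CCL2', 'PPARG', 'NOS3']
```

In the actual study this ranking step would run over a full protein-protein interaction network rather than four toy edges, but the ordering logic is the same.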
Procedia PDF Downloads 123
1315 Case Study Analysis of 2017 European Railway Traffic Management Incident: The Application of System for Investigation of Railway Interfaces Methodology
Authors: Sanjeev Kumar Appicharla
Abstract:
This paper presents the results of the modelling and analysis of a European Railway Traffic Management System (ERTMS) safety-critical incident on the Cambrian Railway in the UK, using report RAIB 17/2019 as the primary input, to raise awareness of biases in the systems engineering process. The RAIB, the UK's independent accident investigator, published report RAIB 17/2019 giving the details of its investigation of the focal event in the form of the immediate cause, causal factors, and underlying factors, together with recommendations to prevent a repeat of the safety-critical incident on the Cambrian Line. The System for Investigation of Railway Interfaces (SIRI) is the methodology used to model and analyze the incident. The SIRI methodology uses the Swiss Cheese Model to model the incident and identifies latent failure conditions (potentially less-than-adequate conditions) by means of the management oversight and risk tree technique. The benefits of the SIRI methodology are threefold. First, it incorporates the “heuristics and biases” approach, advanced by the 2002 Nobel laureate in Economic Sciences, Prof Daniel Kahneman, into the management oversight and risk tree technique to identify systematic errors. Civil engineering and programme management railway professionals are aware of the role “optimism bias” plays in programme cost overruns and are familiar with bow-tie (fault and event tree) model-based safety risk modelling techniques; however, the role of systematic errors due to heuristics and biases is not yet appreciated. The SIRI approach thus overcomes the omission of human and organizational factors from accident analysis.
Second, the scope of the investigation includes all levels of the socio-technical system, including government, regulators, railway safety bodies, duty holders, signalling firms, transport planners, and front-line staff, so that lessons are learned at the decision-making and implementation levels as well. Third, the author's past accident case studies are supplemented with evidence drawn from practitioners' and academic researchers' publications. This serves to discuss the role of systems thinking in improving decision-making and risk management processes and practices in the ISO/IEC 15288 systems engineering standard and in industrial contexts such as the GB railways and artificial intelligence (AI).
Keywords: accident analysis, AI algorithm internal audit, bounded rationality, Byzantine failures, heuristics and biases approach
Procedia PDF Downloads 188
1314 Refined Edge Detection Network
Authors: Omar Elharrouss, Youssef Hmamouche, Assia Kamal Idrissi, Btissam El Khamlichi, Amal El Fallah-Seghrouchni
Abstract:
Edge detection is one of the most challenging tasks in computer vision, due to the complexity of detecting edges or boundaries in real-world images that contain objects of different types and scales, like trees and buildings, as well as varied backgrounds. It is also a key task for many computer vision applications. Using a set of backbones as well as attention modules, deep-learning-based methods have improved edge detection compared with traditional methods like Sobel and Canny. However, images of complex scenes still represent a challenge for these methods. Also, the edges detected by existing approaches suffer from unrefined results, with output images containing many erroneous edges. To overcome this, in this paper, a refined edge detection network (RED-Net) is proposed, using the mechanism of residual learning. By maintaining the high resolution of edges during the training process and conserving the resolution of the edge image through the network stages, we connect the pooling outputs at each stage with the output of the previous layer. Also, after each layer, we use an affine batch-normalization layer as an erosion operation on the homogeneous regions of the image. The proposed method is evaluated using the most challenging datasets, including BSDS500, NYUD, and Multicue. The obtained results outperform existing edge detection networks in terms of performance metrics and quality of output images.
Keywords: edge detection, convolutional neural networks, deep learning, scale-representation, backbone
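As a rough illustration of the two ideas the abstract contrasts, a traditional Sobel operator and a residual stage that adds a correction back to its input (y = x + F(x)), here is a minimal NumPy sketch. The correction filter is an untrained stand-in for a learned layer; nothing below reproduces the actual RED-Net architecture.

```python
import numpy as np

def conv2(img, k):
    """Plain 'valid' 2-D correlation, enough for this illustration."""
    m, n = k.shape
    h, w = img.shape
    out = np.zeros((h - m + 1, w - n + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = float(np.sum(img[i:i + m, j:j + n] * k))
    return out

def sobel_magnitude(img):
    """Traditional baseline mentioned in the abstract: Sobel gradient magnitude."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    return np.hypot(conv2(img, kx), conv2(img, kx.T))

def residual_refine(edge_map, kernel):
    """One residual stage, y = x + F(x): a correction is added back to the
    input, so the stage preserves the resolution of the edge map (here F is
    an untrained toy convolution with 'same' padding, standing in for a
    learned layer)."""
    return edge_map + conv2(np.pad(edge_map, 1), kernel)

# Toy image: vertical step edge between columns 3 and 4
img = np.zeros((8, 8))
img[:, 4:] = 1.0

coarse = sobel_magnitude(img)             # 6x6 coarse edge map
toy_f = np.full((3, 3), -0.02)            # stand-in for a learned correction
refined = residual_refine(coarse, toy_f)  # same 6x6 shape: resolution conserved
```

The point of the residual form is visible in the last line: whatever F does, the stage's output keeps the input's spatial resolution, which is the property the abstract exploits to avoid degraded edge maps.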
Procedia PDF Downloads 102
1313 Financial Analysis of Foreign Direct Investment in Mexico
Authors: Juan Peña Aguilar, Lilia Villasana, Rodrigo Valencia, Alberto Pastrana, Martin Vivanco, Juan Peña C
Abstract:
Each year a growing number of companies enter Mexico in search of domestic market share. These activities, including retail stores, long-distance and local telephony, raw materials and energy, and particularly the financial sector, have managed to significantly increase their weight in the flows of FDI into Mexico. However, one should consider whether these FDI trends are positive for the Mexican economy and whether these activities will increase Mexican exports in the medium term, along with their share in GDP, gross fixed capital formation and employment. In general, the paper stresses that these activities have so far been unable to generate significant linkages with the rest of the economy, a process not helped by competitiveness policies that treat these activities as neutral or horizontal. Since the nineties, foreign direct investment (FDI) has shown remarkable dynamism, internationally, in Latin America, and in Mexico. Mexico was the most important recipient of FDI in Latin America during 1990-1995 and has since been displaced by Brazil; FDI increased from levels below 1% of GDP during the eighties to around 3% of GDP during the nineties. Its impact has been significant not only from a macroeconomic perspective; it has also allowed the generation of a new industrial production structure and organization, parallel to a significant modernization of a segment of the economy. The case of Mexico is also particularly interesting and relevant because, until 1993, FDI had focused on the purchase of state assets during the privatization process. This paper aims to present FDI flows in Mexico and analyze the different business strategies that have been touched and encouraged by FDI. On the one hand, it briefly discusses regulatory issues and the source and recipient sectors of FDI.
Furthermore, the paper presents in more detail the impacts and changes that FDI has generated in the Mexican economy; the macroeconomic context and the later legislative changes that resulted in the current regulations around FDI in Mexico, including aspects of the North American Free Trade Agreement (NAFTA), are also examined. It is worth noting that foreign investment cannot be considered only from the perspective of the receiving economic units. Instead, these flows also reflect the strategic interests of transnational corporations (TNCs) and other companies seeking access to markets and increased competitiveness of their production and global distribution networks, among other reasons. Similarly, it is important to note that foreign investment in its various forms is critically dependent on historical and temporal aspects. Thus, the same functionality can vary significantly depending on the specific characteristics of both the receptor units and the sources of FDI, including macroeconomic, institutional, industrial organization, and social aspects, among others.
Keywords: foreign direct investment (FDI), competitiveness, neoliberal regime, globalization, gross domestic product (GDP), NAFTA, macroeconomic
Procedia PDF Downloads 450
1312 Soil-Structure Interaction Models for the Reinforced Foundation System – A State-of-the-Art Review
Authors: Ashwini V. Chavan, Sukhanand S. Bhosale
Abstract:
Challenges of a weak soil subgrade are often resolved either by stabilizing it or by reinforcing it. However, it is also common practice to reinforce the granular fill to improve the load-settlement behavior over weak soil strata. The inclusion of reinforcement in the engineered granular fill provided a new impetus for the development of enhanced Soil-Structure Interaction (SSI) models, also known as mechanical foundation models or lumped-parameter models. Several researchers have been working in this direction to understand the mechanism of granular fill-reinforcement interaction and the response of weak soil under the application of load. These models have been developed by extending available SSI models such as the Winkler model, Pasternak model, Hetenyi model and Kerr model, and they help visualize the load-settlement behavior of a physical system through 1-D and 2-D analysis, considering a beam and a plate resting on the foundation, respectively. Based on the literature survey, these models are categorized as the ‘Reinforced Pasternak Model,’ ‘Double Beam Model,’ ‘Reinforced Timoshenko Beam Model,’ and ‘Reinforced Kerr Model.’ The present work reviews the past 30+ years of research in the field of SSI models for reinforced foundation systems, presenting the conceptual development of these models systematically and discussing their limitations. Special effort is taken to tabulate the parameters and their significance in the load-settlement analysis, which may be helpful in future studies for the comparison and enhancement of the results and findings of physical models.
Keywords: geosynthetics, mathematical modeling, reinforced foundation, soil-structure interaction, ground improvement, soft soil
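The difference between the two base models most often extended in this literature can be shown numerically: Winkler treats the subgrade as independent springs (p = k·w), while Pasternak adds a shear layer coupling them (p = k·w − G·d²w/dx²). The sketch below uses assumed parameter values for illustration only, not figures from any of the reviewed papers.

```python
import numpy as np

# Assumed, illustrative parameter values (not taken from the paper)
k = 5.0e6         # Winkler modulus of subgrade reaction, N/m^3
G = 2.0e5         # Pasternak shear-layer modulus, N/m
L, n = 10.0, 201  # span (m) and number of grid points
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]

# A smooth, bowl-shaped settlement profile (m), largest at mid-span
w = 0.01 * np.sin(np.pi * x / L)

# Winkler: independent springs, p = k * w
p_winkler = k * w

# Pasternak: springs coupled by a shear layer, p = k * w - G * w''
w2 = np.gradient(np.gradient(w, dx), dx)  # second derivative, finite differences
p_pasternak = k * w - G * w2

mid = n // 2
print(p_winkler[mid], p_pasternak[mid])   # shear layer adds resistance at mid-span
```

Because the curvature of the settlement bowl is negative at mid-span, the Pasternak reaction there exceeds the Winkler one, which is exactly the coupling effect the extended (reinforced) models build upon.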
Procedia PDF Downloads 123
1311 Developing Offshore Energy Grids in Norway as Capability Platforms
Authors: Vidar Hepsø
Abstract:
The energy and oil companies on the Norwegian continental shelf come from a situation where each asset controls and manages its own energy supply (island mode) and are moving towards a situation where assets need to collaborate and coordinate energy use with others, sharing the energy that is provided, due to the increased cost and scarcity of electric energy. Currently, several areas are electrified either with an onshore grid cable or receive intermittent energy from offshore wind parks. While the onshore grid in Norway is well regulated, the offshore grid is still in the making, with several oil and gas electrification projects and offshore wind developments just started. The paper describes the shift in mindset that comes with operating this new offshore grid. This transition process heralds an increase in collaboration across boundaries and integration of energy management across companies, businesses and technical disciplines, together with engagement with stakeholders in the larger society. The transition is described as a function of the new challenges of an increasingly complex energy mix (wind, oil/gas, hydrogen and others) coupled with increased technical and organizational complexity in energy management. Organizational complexity denotes increasing integration across boundaries, whether these boundaries are companies, vendors, professional disciplines, regulatory regimes/bodies, businesses, or the numerous societal stakeholders. New practices must be developed, legitimized and institutionalized across these boundaries. Only part of this complexity can be mitigated technically, e.g. by the use of batteries, mixed energy systems and simulation/forecasting tools; many challenges must be mitigated with legitimate, institutionalized governance practices at many levels. Offshore electrification supports Norway’s 2030 climate targets but is also controversial, since it draws on the larger society’s energy resources.
This means that new systems and practices must also be transparent, not only for the industry and the authorities, but also acceptable and just for the larger society. The paper reports on ongoing work in Norway: participant observation and interviews in projects and with people working on offshore grid development. One case presented is the development of an offshore floating wind farm connected to two offshore installations; the second case is an offshore grid development initiative providing six installations with electric energy via an onshore cable. The development of the offshore grid is analyzed using a capability platform framework that describes the technical, competence, work-process and governance capabilities under development in Norway. A capability platform is a ‘stack’ with the following layers: intelligent infrastructure; information and collaboration; knowledge sharing and analytics; and, finally, business operations. The need for better collaboration and energy forecasting tools/capabilities in this stack is given special attention in the two use cases presented.
Keywords: capability platform, electrification, carbon footprint, control rooms, energy forecasting, operational model
Procedia PDF Downloads 68
1310 DEKA-1 a Dose-Finding Phase 1 Trial: Observing Safety and Biomarkers using DK210 (EGFR) for Inoperable Locally Advanced and/or Metastatic EGFR+ Tumors with Progressive Disease Failing Systemic Therapy
Authors: Spira A., Marabelle A., Kientop D., Moser E., Mumm J.
Abstract:
Background: Both interleukin-2 (IL-2) and interleukin-10 (IL-10) have been extensively studied for their stimulatory function on T cells and their potential to obtain sustainable tumor control in RCC, melanoma, lung, and pancreatic cancer as monotherapy, as well as in combination with PD-1 blockers, radiation, and chemotherapy. While approved, IL-2 retains significant toxicity, preventing its widespread use. The significant efforts undertaken to uncouple IL-2 toxicity from its anti-tumor function have been unsuccessful, and the early-phase clinical safety observed with PEGylated IL-10 was not met in a blinded Phase 3 trial. Deka Biosciences has engineered a novel molecule coupling wild-type IL-2 to a high-affinity variant of Epstein-Barr virus (EBV) IL-10 via a scaffold (scFv) that binds to epidermal growth factor receptors (EGFR). This patented molecule, termed DK210 (EGFR), is retained at high levels within the tumor microenvironment for days after dosing. In addition to overlapping and non-redundant anti-tumor functions, IL-10 reduces the risk of IL-2-mediated cytokine release syndrome and inhibits IL-2-mediated T regulatory cell proliferation. Methods: DK210 (EGFR) is being evaluated in an open-label, dose-escalation (Phase 1) study with 5 monotherapy dose levels (0.025-0.3 mg/kg) and expansion cohorts in combination with PD-1 blockers, radiation or chemotherapy in patients with advanced solid tumors overexpressing EGFR. Key eligibility criteria include 1) confirmed progressive disease on at least one line of systemic treatment, 2) EGFR overexpression or amplification documented in histology reports, and 3) at least a 4-week or 5-half-lives window since the last treatment, while excluding subjects with long QT syndrome, multiple myeloma, multiple sclerosis, myasthenia gravis, or uncontrolled infectious, psychiatric, neurologic, or cancer disease.
Plasma and tissue samples will be investigated for pharmacodynamic and predictive biomarkers and genetic signatures associated with IFN-gamma secretion, aiming to select subjects for treatment in Phase 2. Conclusion: Through the successful coupling of wild-type IL-2 with a high-affinity IL-10 targeted directly to the tumor microenvironment, DK210 (EGFR) has the potential to harness IL-2’s and IL-10’s known anti-cancer promise while reducing immunogenicity and toxicity risks, enabling safe concomitant cytokine treatment with other anti-cancer modalities.
Keywords: cytokine, EGFR overexpression, interleukin-2, interleukin-10, clinical trial
Procedia PDF Downloads 86
1309 Design of RF Generator and Its Testing in Heating of Nickel Ferrite Nanoparticles
Authors: D. Suman, M. Venkateshwara Rao
Abstract:
Cancer, a disease caused by the uncontrolled division of abnormal cells in a part of the body, affects millions of people and leads to many deaths. Even though tremendous developments have taken place over the last few decades, an effective therapy for cancer is still not a reality. The existing techniques of cancer therapy, chemotherapy and radiotherapy, have limitations in terms of side effects, patient discomfort, radiation hazards and the localization of treatment. This paper describes a novel method for cancer therapy using RF hyperthermia applied to nanoparticles. We have synthesized ferromagnetic nanoparticles and characterized them using XRD and TEM. These nanoparticles, after biocompatibility studies, will be injected into the body with a suitable tracer element having affinity to the specific tumor site. When RF energy is applied to the nanoparticles at the tumor site, it produces localized heating; nearly 41-45°C is sufficient to kill the tumor cells. We have designed an RF source generator provided with a temperature feedback controller to control the radiation-induced temperature of the tumor site. The temperature control is achieved through a negative feedback mechanism using a thermocouple and a relay connected to the power source of the RF generator. This method has advantages such as localized therapy, less radiation, and no side effects. It presents several challenges: designing the RF source and coils suitable for the tumor site, the biocompatibility of the nanomaterials, and the cooling system design for the RF coil. If these challenges can be overcome, this method will be of huge benefit to society.
Keywords: hyperthermia, cancer therapy, RF source generator, nanoparticles
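The relay-based negative feedback loop described above amounts to on/off (bang-bang) temperature control: the thermocouple reading switches the RF power off above the setpoint band and back on below it. The toy simulation below illustrates the idea; all thermal constants and setpoints are illustrative assumptions, not the authors' hardware figures.

```python
def simulate(t_body=37.0, setpoint=43.0, hysteresis=0.5,
             heat_rate=0.8, cool_rate=0.1, dt=1.0, steps=600):
    """Bang-bang control of tumor-site temperature with a hysteresis band."""
    temp, power_on, history = t_body, True, []
    for _ in range(steps):
        if power_on and temp >= setpoint + hysteresis:
            power_on = False   # relay opens: RF source off
        elif not power_on and temp <= setpoint - hysteresis:
            power_on = True    # relay closes: RF source on
        heating = heat_rate if power_on else 0.0
        # Newton-style cooling back toward body temperature
        temp += (heating - cool_rate * (temp - t_body)) * dt
        history.append(temp)
    return history

hist = simulate()
print(min(hist[100:]), max(hist[100:]))  # oscillates inside the therapeutic band
```

After the initial warm-up, the simulated temperature cycles within the 41-45°C window the abstract targets; the width of the oscillation is set by the hysteresis band and the thermal time constants.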
Procedia PDF Downloads 460
1308 Effect of Nigella Sativa Seeds and Ajwa Date on Blood Glucose Level in Saudi Patients with Type 2 Diabetes Mellitus
Authors: Reham Algheshairy, Khaled Tayeb, Christopher Smith, Rebecca Gregg, Haruna Musa
Abstract:
Background: Diabetes is a medical condition that refers to the pancreas’ inability to secrete sufficient insulin, the hormone responsible for controlling glucose levels in the body; any surplus glucose in the bloodstream is excreted through the urinary system. Insulin resistance in cells can also cause the condition, despite the pancreas producing the required amount of insulin. A number of researchers claim that the prevalence of diabetes in Saudi Arabia has reached epidemic proportions, although one study did observe a positive development in the rising awareness of diabetes, possibly indicative of Saudi Arabia’s improving healthcare system. While a number of factors can cause diabetes, the ever-increasing incidence of the disease in Saudi Arabia has been blamed primarily on low levels of physical activity and high levels of obesity. Objectives: The project has two aims. The first is to investigate the regulatory effects of the consumption of Nigella seeds and Ajwa dates on blood glucose levels in diabetic patients with type 2 diabetes. The second is to investigate whether these dietary factors may have potentially beneficial effects in controlling the complications associated with type 2 diabetes. Methods: This study used a randomized crossover intervention trial of 75 Saudi males and females with type 2 diabetes, aged between 18 and 70 years, at Al-Noor Hospital in Makkah (KSA), divided into 3 groups. Group 1 consumed 2 g of Nigella sativa seeds daily along with a modified diet for 12 weeks, group 2 was given Ajwa dates daily with a modified diet for 12 weeks, and group 3 followed a modified diet alone for 12 weeks. Anthropometric measurements were taken at baseline, along with blood samples for HbA1c and fasting blood glucose, and again at the end of the 12 weeks. Results: This study found a significant decrease in blood glucose (FBG and 2hPPBG) and HbA1c in the diet and Nigella seed groups compared to the Ajwa date group.
No significant change was found in HbA1c, FBG or 2hPPBG in the Ajwa group. Conclusion: This study illustrated a significant improvement in some markers of glycaemia following 2 g of Nigella seeds and a modified diet for 12 weeks. The dose of 2 g/day of Nigella seeds was found to be more effective in controlling BGL and HbA1c than the control and Ajwa regimens. This suggests that Nigella seeds together with a modified diet may have a potential role in controlling outcomes for type 2 diabetes and the disease itself. Further research is needed on a larger scale to determine the optimum dose and duration of Nigella and Ajwa in order to achieve the desired results.
Keywords: type 2 diabetes, Nigella seeds, Ajwa dates, fasting blood glucose, control
Procedia PDF Downloads 295
1307 Theoretical and Experimental Investigation of Binder-free Trimetallic Phosphate Nanosheets
Authors: Iftikhar Hussain, Muhammad Ahmad, Xi Chen, Li Yuxiang
Abstract:
Transition metal phosphides and phosphates are newly emerging electrode material candidates for energy storage devices. For the first time, we report uniformly distributed, interconnected, and well-aligned two-dimensional nanosheets made from trimetallic Zn-Co-Ga phosphate (ZCGP) electrode materials with a preserved crystal phase. The ZCGP electrode material exhibits about 2.85 and 1.66 times higher specific capacity than mono- and bimetallic phosphate electrode materials, respectively, at the same current density. The trimetallic ZCGP electrode exhibits superior conductivity, a lower internal resistance (IR) drop, and higher Coulombic efficiency than the mono- and bimetallic phosphates. The charge storage mechanism is studied for the mono-, bi- and trimetallic electrode materials, illustrating diffusion-dominated battery-type behavior. By means of density functional theory (DFT) calculations, ZCGP shows superior metallic conductivity due to the modified exchange splitting originating from the 3d orbitals of Co atoms in the presence of Zn and Ga. Moreover, a hybrid supercapacitor (ZCGP//rGO) device is engineered, which delivers a high energy density (ED) of 40 W h kg⁻¹ and a high power density (PD) of 7,745 W kg⁻¹, lighting 5 different colors of light-emitting diodes (LEDs). These outstanding results confirm a promising battery-type electrode material for energy storage applications.
Keywords: trimetallic phosphate, nanosheets, DFT calculations, hybrid supercapacitor, binder-free, synergistic effect
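For readers wanting to connect the headline figures, the standard supercapacitor relations tie specific capacitance, energy density and power density together. The discharge-time value below is inferred from the reported ED and PD; it is not stated by the authors.

```python
def energy_density_wh_per_kg(c_f_per_g, delta_v):
    """E = C * dV^2 / 2, converted from J/g to Wh/kg (divide by 3.6)."""
    return c_f_per_g * delta_v ** 2 / (2 * 3.6)

def discharge_time_s(ed_wh_per_kg, pd_w_per_kg):
    """dt = 3600 * E / P, since P = E / dt with E in Wh/kg and P in W/kg."""
    return 3600.0 * ed_wh_per_kg / pd_w_per_kg

# Abstract figures: ED = 40 Wh/kg at PD = 7,745 W/kg
t = discharge_time_s(40.0, 7745.0)
print(f"implied discharge time at peak power: {t:.1f} s")  # about 18.6 s
```

An 18-second discharge at peak power is typical of hybrid battery-type/capacitive devices, consistent with the diffusion-dominated behavior reported above.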
Procedia PDF Downloads 210
1306 Students' Ability to Solve Complex Accounting Problems Using a Framework-Based Approach
Authors: Karen Odendaal
Abstract:
Accounting transactions are becoming more complex, and more extensive accounting guidance is provided on a continuous basis. It is widely perceived that conceptual teaching of accounting contributes to lifelong learning, and such a conceptual teaching approach also contributes to effective accounting problem-solving. This framework-based approach is rooted in educational psychologies such as constructivism and Ausubel’s subsumption theory. This study aimed to investigate the ability of students to solve complex accounting problems using only the concepts underlying the Conceptual Framework. An assignment was administered to pre-graduate students at a South African university; the study used an interpretative research design that implemented multiple research instruments to investigate students’ problem-solving ability. Student perceptions were analysed, aided by a related reflective questionnaire. The importance of the study lies in the necessity for accounting educators to foster a conceptual understanding among students as a mechanism for the solving of accounting issues. The results indicate that the ability of students to solve accounting problems effectively using only the Conceptual Framework depends on the complexity of the scenario and the students’ familiarity with the problem. The study promotes a balanced and more conceptual (rather than purely technical) approach to the solving of complex accounting problems, and it places considerable emphasis on the importance of the Conceptual Framework in accounting education and on the promotion of lifelong learning in the subject field.
Keywords: accounting education, conceptual teaching, constructivism, framework-based, problem-solving
Procedia PDF Downloads 233
1305 High-Temperature Behavior of Boiler Steel by Friction Stir Processing
Authors: Supreet Singh, Manpreet Kaur, Manoj Kumar
Abstract:
High-temperature corrosion is a severe material degradation mechanism experienced in thermal power plants and other energy generation sectors. Metallic materials such as ferritic steels offer easy fabrication and machinability at low cost, but a serious drawback is the deterioration of their properties through interaction with the environment: they do not endure high temperatures for extended periods because of their poor corrosion resistance. Friction Stir Processing (FSP) has emerged as a potent means of surface modification and microstructure control in the thermo-mechanically affected zones of various metal alloys. In the current research work, FSP was performed on boiler tube of SA 210 Grade A1 material, which is regularly used in thermal power plants. The aim was to strengthen the SA 210 Grade A1 boiler steel through microstructural refinement by FSP and to analyze its effect on high-temperature corrosion behavior. The high-temperature corrosion performance of the unprocessed and FSPed specimens was evaluated in the laboratory using a molten salt environment of Na₂SO₄-82%Fe₂(SO₄)₃. The unprocessed and FSPed low-carbon Grade A1 steels were evaluated in terms of microstructure, corrosion resistance, and mechanical (hardness and tensile) properties. In-depth characterization was done by EBSD, SEM/EDS and X-ray mapping analyses, with the aim of proposing the mechanism behind the high-temperature corrosion behavior of the FSPed steel.
Keywords: boiler steel, characterization, corrosion, EBSD/SEM/EDS/XRD, friction stir processing
Procedia PDF Downloads 238
1304 Captive Insurance in Hong Kong and Singapore: A Promising Risk Management Solution for Asian Companies
Authors: Jin Sheng
Abstract:
This paper addresses a promising area of the insurance sector to develop in Asia. Captive insurance, which provides risk-mitigation services for its parent company, has great potential in energy, infrastructure, agriculture, logistics, catastrophe, and alternative risk transfer (ART), and will greatly affect the framework of the insurance industry. However, the Asian captive insurance market takes up only a small proportion of the global market. The recent supply chain interruption case of Hanjin Shipping indicates the significance of risk management for an Asian company’s sustainability and resilience. China has substantial needs and great potential to develop captive insurance, on account of currency volatility, enterprises’ credit risks, and the legal and operational risks of the Belt and Road initiative. To date, Mainland Chinese enterprises have only four offshore captives (incorporated by CNOOC, Sinopec, Lenovo and CGN Power), three onshore captive insurance companies (incorporated by CNPC, China Railway, and COSCO), and one industrial captive insurance organization, the China Ship-owners Mutual Assurance Association. Its captive market grows slowly, with one or two captive insurers licensed yearly after September 2011. As an international financial center, Hong Kong has comparative advantages in taxation, professionals, market access and well-established financial infrastructure for developing a functional captive insurance market. For example, Hong Kong’s income tax for an insurance company is 16.5%, while China’s income tax for an insurance company is 25% plus a business tax of 5%. Furthermore, restrictions on the market entry and operations of China’s onshore captives make establishing offshore captives in international or regional captive insurance centers such as Singapore, Hong Kong, and other overseas jurisdictions an attractive option. Thus, there are abundant business opportunities in this area.
Using a methodology of comparative study and case analysis, this paper discusses the incorporation, regulatory issues, taxation and prospects of the captive insurance markets in Hong Kong, China and Singapore. Hong Kong and Singapore are both international financial centers with prominent advantages in tax concessions, technology, implementation, professional services, and well-functioning legal systems. Singapore, the domicile of 71 active captives, has been the largest captive insurance hub in Asia, as well as an established reinsurance hub. Hong Kong is an emerging captive insurance hub, with 5 to 10 newly licensed captives each year according to the Hong Kong Financial Services Development Council; it is predicted that Hong Kong will become a domicile for 50 captive insurers by 2025. This paper also compares the formation of a captive in Singapore with that in other jurisdictions such as Bermuda and Vermont.
Keywords: Alternative Risk Transfer (ART), captive insurance company, offshore captives, risk management, reinsurance, self-insurance fund
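The tax comparison in the text can be made concrete with a deliberately simplistic calculation. It ignores deductions, differing tax bases and treaty effects, and the profit and turnover figures are hypothetical; only the rates come from the text.

```python
def hk_profits_tax(profit):
    """Hong Kong: 16.5% on profit (rate from the text)."""
    return 0.165 * profit

def cn_insurer_tax(profit, turnover):
    """Mainland China, as described in the text: 25% income tax on profit
    plus a 5% business tax levied on turnover."""
    return 0.25 * profit + 0.05 * turnover

profit, turnover = 10.0, 100.0   # hypothetical figures, in millions
print(hk_profits_tax(profit), cn_insurer_tax(profit, turnover))
```

Because the business tax falls on turnover rather than profit, the gap widens for insurers with thin margins, which is part of why the offshore route looks attractive.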
Procedia PDF Downloads 229
1303 Arsenic Removal from Drinking Water by Hybrid Hydrogel-Biochar Matrix: An Understanding of Process Parameters
Authors: Vibha Sinha, Sumedha Chakma
Abstract:
Arsenic (As) contamination in drinking water is a serious concern worldwide, resulting in severe health maladies. To tackle this problem, hydrogel-based matrices that selectively take up toxic metals from contaminated water have increasingly been examined as a potential practical method for metal removal. The major concern with hydrogels is the low stability of the matrix, resulting in poor performance. In this study, the potential of a hybrid hydrogel-biochar matrix synthesized from natural plant polymers, specific for As removal, was explored. Various changes in the composition and functional groups of the elements contained in the matrix due to the adsorption of As were identified. Moreover, to resolve the stability issue in the hydrogel matrix, the optimum and effective mixing of hydrogel with biochar was studied: mixing varied proportions of the matrix components at the time of the digestion process was tested. Preliminary results suggest that partial premixing methods may increase stability and reduce cost. The addition of nanoparticles and specific catalysts at different concentrations of As(III) and As(V) under batch conditions was performed to study their role in enhancing the performance of the hydrogel matrix. Further, the effects of process parameters and the optimal uptake conditions were established, and a detailed mechanism was derived from the experimental studies. This study provides an efficient, specific and low-cost As removal method that offers excellent regeneration ability, allowing the matrix to be reused.
Keywords: arsenic, catalysts, hybrid hydrogel-biochar, water purification
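Batch uptake results of the kind described above are commonly summarized with a Langmuir isotherm. The sketch below is generic, with assumed capacity and affinity values rather than parameters fitted from this study.

```python
def langmuir_q(c_eq, q_max, k_l):
    """Equilibrium uptake q (mg/g) at equilibrium concentration c_eq (mg/L):
    q = q_max * K_L * C / (1 + K_L * C)."""
    return q_max * k_l * c_eq / (1.0 + k_l * c_eq)

def removal_percent(c0, c_eq):
    """Percentage of As removed from solution."""
    return 100.0 * (c0 - c_eq) / c0

q_max, k_l = 25.0, 0.8   # assumed capacity (mg/g) and affinity (L/mg)
for c in (0.1, 1.0, 10.0):
    print(c, round(langmuir_q(c, q_max, k_l), 2))  # uptake rises toward q_max
```

Fitting q_max and K_L to the measured batch data, and repeating the fit after each regeneration cycle, is one way to quantify the stability gain the premixing experiments are after.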
Procedia PDF Downloads 191
1302 The Role of Public Management Development in Enhancing Public Service Delivery in the South African Local Government
Authors: Andrew Enaifoghe
Abstract:
The study examined the role of public management development in enhancing public service delivery in South African local government. The study holds that the empowerment of the third-tier sphere of government in South Africa remains the instrument required to advance both national and continental development. Over the years, this sphere has been overwhelmed by problems and imbalances related to ethical practice, accountability, and the functioning of the local government system and its machinery. The study finds that these imbalances are reinforced by a lack of understanding and unanimity as to what public management development in a democratic system is and how it should work to deliver the dividends of democracy in providing public goods. Studies indicate that the consequences are widespread corruption and the misrepresentation of government priorities, both of which weaken the ability of governments to enhance broad-based economic growth and the social well-being of the people. This study addressed the problem of public management and accountable local government. It indicates the need for citizens’ participation in the decision-making process in delivering public services in South Africa and shows how its accountability mechanisms support good governance. The study concludes that ethical lapses in South Africa have reached such proportions that, under social and governmental pressure, various institutions have to reconsider where they stand regarding ethics, ethical behaviour, accountability and professionalism in delivering public goods to the people at the local municipal level.
Keywords: accountability, development, democratic system, South Africa
Procedia PDF Downloads 125