Search results for: loss distribution approach
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20270

15050 The Basics of Cognitive Behavioral Family Therapy and the Treatment of Various Physical and Mental Diseases

Authors: Mahta Mohamadkashi

Abstract:

The family is the most important source of security and health for the members of a society, and at the same time, it is the main setting in which social and psychological problems of all kinds arise. On the one hand, a family is a natural group with many goals and roles that are important and necessary for all family members. On the other hand, the family is a strong and organized group whose implicit rules and procedures draw the therapist in. The relationship between the family environment and background and mental illness has long been a focus of researchers, and the research and experiments that have been conducted show that the functioning of the family is related to the mental health of its members. Currently, several theoretical perspectives with different approaches seek to explain and resolve psychological problems and family conflicts. This research investigates cognitive-behavioral family therapy using a descriptive-analytical method and library-based data collection, relying in particular on Persian and Latin-script books and articles, in order to examine one of the important approaches of family therapy together with its requirements and limitations. To this end, a brief background and introduction to the family and family therapy are first presented; the basics of cognitive-behavioral family therapy, its implementation process, and the various techniques of this approach are then discussed in detail. After that, the application of this approach to the treatment of various physical and mental diseases is reviewed through related research, and the strengths and weaknesses of the implementation procedures, the limitations, and future directions in this field are examined.
In general, this study emphasizes the role of the family system in the occurrence of psychological diseases and disorders and also validates the role of the family system in the treatment of those diseases and disorders. Cognitive-behavioral family therapy is supported as an effective treatment approach for a variety of mental disorders.

Keywords: cognitive-behavioral, family, family therapy, cognitive-behavioral family therapy

Procedia PDF Downloads 81
15049 Whale Optimization Algorithm for Optimal Reactive Power Dispatch Solution Under Various Contingency Conditions

Authors: Medani Khaled Ben Oualid

Abstract:

Most researchers have solved and analyzed the ORPD problem under normal conditions. However, network collapses occur under contingency conditions. In this paper, ORPD under several contingencies is solved using the proposed Whale Optimization Algorithm (WOA). To ensure the viability of the power system under contingency conditions, several critical cases are simulated in order to prepare the power system to face such situations. The results are obtained on the IEEE 30-bus test system for the ORPD problem, in which bus voltages, transformer tap positions, and reactive power sources are the control variables. Moreover, another method, Particle Swarm Optimization with Time-Varying Acceleration Coefficients (PSO-TVAC), is compared with the proposed technique. Simulation results indicate that the proposed WOA gives a remarkably effective solution in the case of outages.
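The core WOA update rules (encircling the best solution, a spiral "bubble-net" move, and random exploration) can be sketched as below. This is a minimal, generic implementation minimizing a toy test function, not the paper's actual ORPD formulation: the real objective (power loss on the IEEE 30-bus system) and its control-variable encoding are assumptions left out here.

```python
import math
import random

def woa_minimize(objective, dim, bounds, n_whales=20, n_iter=200, seed=1):
    """Minimal Whale Optimization Algorithm sketch (toy illustration only)."""
    rng = random.Random(seed)
    lo, hi = bounds
    whales = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_whales)]
    best = min(whales, key=objective)[:]
    for t in range(n_iter):
        a = 2 - 2 * t / n_iter              # 'a' decreases linearly from 2 to 0
        for w in whales:
            A = 2 * a * rng.random() - a
            C = 2 * rng.random()
            p = rng.random()
            for j in range(dim):
                if p < 0.5:
                    if abs(A) < 1:          # exploit: encircle the best whale
                        d = abs(C * best[j] - w[j])
                        w[j] = best[j] - A * d
                    else:                   # explore: move toward a random whale
                        rand = whales[rng.randrange(n_whales)]
                        d = abs(C * rand[j] - w[j])
                        w[j] = rand[j] - A * d
                else:                       # spiral (bubble-net) update
                    l = rng.uniform(-1, 1)
                    d = abs(best[j] - w[j])
                    w[j] = d * math.exp(l) * math.cos(2 * math.pi * l) + best[j]
                w[j] = min(max(w[j], lo), hi)   # clamp to the search bounds
            if objective(w) < objective(best):
                best = w[:]
    return best, objective(best)

# Hypothetical stand-in for real power loss: a shifted sphere function.
sphere = lambda x: sum((xi - 1.0) ** 2 for xi in x)
sol, loss = woa_minimize(sphere, dim=4, bounds=(-10, 10))
```

In the actual ORPD setting, the decision vector would hold generator voltages, tap positions, and reactive sources, with constraint handling added on top of this skeleton.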

Keywords: optimal reactive power dispatch, metaheuristic techniques, whale optimization algorithm, real power loss minimization, contingency conditions

Procedia PDF Downloads 80
15048 Image Transform Based on Integral Equation-Wavelet Approach

Authors: Yuan Yan Tang, Lina Yang, Hong Li

Abstract:

The harmonic model is a very important approximation for the image transform. The harmonic model maps an image onto an arbitrary shape; however, this model cannot be described by any fixed functions in mathematics. In fact, it is represented by a partial differential equation (PDE) with boundary conditions. Therefore, developing an efficient method to solve such a PDE is extremely significant for the image transform. In this paper, a novel integral equation-wavelet based method is presented, which consists of three steps: (1) the partial differential equation is converted into a boundary integral equation and representation by an indirect method; (2) the boundary integral equation and representation are converted into a plane integral equation and representation by the boundary measure formula; (3) the plane integral equation and representation are then solved by a method we call wavelet collocation. Our approach has two main advantages: the shape of the image is arbitrary, and the program code is independent of the boundary. The performance of our method is evaluated by numerical experiments.
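To make the underlying harmonic model concrete: a harmonic function satisfies Laplace's equation with prescribed boundary values. The sketch below solves that PDE by simple Jacobi relaxation on a square grid. This is only an illustration of the PDE itself, under assumed grid and boundary data; the paper's own method (boundary integral equation plus wavelet collocation) handles arbitrary shapes, which this finite-difference toy does not.

```python
def solve_laplace(boundary, n=20, n_iter=2000):
    """Jacobi relaxation for Laplace's equation on an n x n grid.

    boundary(i, j) supplies the fixed value on the domain edge;
    interior values converge to the harmonic interpolant.
    """
    u = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i in (0, n - 1) or j in (0, n - 1):
                u[i][j] = boundary(i, j)
    for _ in range(n_iter):
        new = [row[:] for row in u]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                # Each interior point becomes the average of its neighbours.
                new[i][j] = 0.25 * (u[i-1][j] + u[i+1][j]
                                    + u[i][j-1] + u[i][j+1])
        u = new
    return u

# Hypothetical boundary data: top edge held at 1, other edges at 0.
u = solve_laplace(lambda i, j: 1.0 if i == 0 else 0.0)
```

The advantage the authors claim is precisely that their boundary-integral formulation avoids meshing the interior at all, so the code stays independent of the boundary shape.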

Keywords: harmonic model, partial differential equation (PDE), integral equation, integral representation, boundary measure formula, wavelet collocation

Procedia PDF Downloads 536
15047 Evaluation of Activity of Anacyclus Pyrethrum Methanolic Extract on Acute Inflammation Induced in Rats

Authors: Dalila Bouamra, Chekib Arslane Baki, Abdelhamid Bouchebour, Fatiha Koussa, Amel Benamara, Seoussen Kada

Abstract:

The activity of a methanolic extract of Anacyclus pyrethrum was evaluated using 1% λ-carrageenan-induced paw edema in Wistar albino rats. Oral administration of 200 mg/kg, 400 mg/kg, and 600 mg/kg body weight of the methanolic extract one hour before induction of inflammation exerted a significant inhibitory effect of 47%, 57%, and 62%, respectively, 4 h after λ-carrageenan treatment, and a highly significant inhibitory effect of 57%, 66%, and 75%, respectively, 8 h after λ-carrageenan treatment, compared to the untreated group (100%) and the group treated with aspirin, a standard anti-inflammatory drug. In addition, the effect of the plant extract on the stomach was studied macroscopically and microscopically. At a dose of 600 mg/kg body weight, the extract affected the loss ratio of granulocytes that had invaded the stomach after a period of inflammation.
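The inhibition percentages reported above follow the standard paw-edema formula, inhibition = (1 - Vt/Vc) x 100, where Vt and Vc are the treated and control edema volumes. The volumes in this sketch are hypothetical (the abstract reports only the resulting percentages, not raw measurements):

```python
def percent_inhibition(edema_treated, edema_control):
    """Standard anti-inflammatory inhibition: (1 - Vt/Vc) * 100."""
    return (1.0 - edema_treated / edema_control) * 100.0

# Hypothetical volumes: control edema 1.00 mL, treated 0.53 mL,
# reproducing the reported 47% effect of 200 mg/kg at 4 h.
inhibition = percent_inhibition(0.53, 1.00)
print(round(inhibition))  # -> 47
```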

Keywords: inflammation, Anacyclus pyrethrum, gastritis, Wistar Albinos rats

Procedia PDF Downloads 474
15046 Catalytic Cracking of Hydrocarbon over Zeolite Based Catalysts

Authors: Debdut Roy, Vidyasagar Guggilla

Abstract:

In this research, we highlight our exploratory work on modified zeolite-based catalysts for the catalytic cracking of hydrocarbons to produce light olefins, i.e., ethylene and propylene. The work is focused on understanding the catalyst structure-activity correlation. The catalysts are characterized by surface area and pore size distribution analysis, inductively coupled plasma optical emission spectrometry (ICP-OES), temperature-programmed desorption (TPD) of ammonia, pyridine Fourier transform infrared spectroscopy (FTIR), X-ray diffraction (XRD), and thermogravimetric analysis (TGA), and the results are correlated with the catalytic activity. It is observed that the yield of lighter olefins increases with increasing Brønsted acid strength.

Keywords: catalytic cracking, zeolite, propylene, structure-activity correlation

Procedia PDF Downloads 200
15045 Threshold (K, P) Quantum Distillation

Authors: Shashank Gupta, Carlos Cid, William John Munro

Abstract:

Quantum distillation is the task of concentrating the quantum correlations present in N imperfect copies into M perfect copies (M < N) using free operations involving all P parties sharing the quantum correlation. We present a threshold quantum distillation task in which the same objective is achieved using a smaller number of parties (K < P). In particular, we give exact local filtering operations by which the participating parties, sharing a high-dimensional multipartite entangled state, distill the perfect quantum correlation. We then establish a connection between threshold quantum entanglement distillation and quantum steering distillation and show that threshold distillation may work in scenarios where general distillation protocols such as DEJMPS do not.

Keywords: quantum networks, quantum distillation, quantum key distribution, entanglement distillation

Procedia PDF Downloads 28
15044 On Stochastic Models for Fine-Scale Rainfall Based on Doubly Stochastic Poisson Processes

Authors: Nadarajah I. Ramesh

Abstract:

Much of the research on stochastic point process models for rainfall has focused on Poisson cluster models constructed from either the Neyman-Scott or Bartlett-Lewis processes. The doubly stochastic Poisson process provides a rich class of point process models, especially for fine-scale rainfall modelling. This paper provides an account of recent developments on this topic and presents results based on some of the fine-scale rainfall models constructed from this class of stochastic point processes. In the literature on stochastic models for rainfall, greater emphasis has been placed on modelling rainfall data recorded at hourly or daily aggregation levels. Stochastic models for sub-hourly rainfall are equally important, as there is a need to reproduce rainfall time series at fine temporal resolutions in some hydrological applications. For example, the study of climate change impacts on hydrology and water management initiatives requires the availability of data at fine temporal resolutions. One approach to generating such rainfall data relies on the combination of an hourly stochastic rainfall simulator with a disaggregator that makes use of downscaling techniques. Recent work on this topic adopted a different approach by developing specialist stochastic point process models for fine-scale rainfall, aimed at generating synthetic precipitation time series directly from the proposed stochastic model. One strand of this approach focused on developing a class of doubly stochastic Poisson process (DSPP) models for fine-scale rainfall to analyse data collected in the form of rainfall bucket tip time series. In this context, the arrival pattern of rain gauge bucket tip times N(t) is viewed as a DSPP whose rate of occurrence varies according to an unobserved finite-state irreducible Markov process X(t).
Since the likelihood function of this process can be obtained by conditioning on the underlying Markov process X(t), the models were fitted with maximum likelihood methods. The proposed models were applied directly to the raw data collected by tipping-bucket rain gauges, thus avoiding the need to convert tip times to rainfall depths prior to fitting the models. One advantage of this approach is that the use of maximum likelihood methods enables a more straightforward estimation of parameter uncertainty and comparison of sub-models of interest. Another strand of this approach employed the DSPP model for the arrivals of rain cells and attached a pulse or a cluster of pulses to each rain cell. Different mechanisms for the pattern of the pulse process were used to construct variants of this model. We present the results of these models when fitted to hourly and sub-hourly rainfall data. The results of our analysis suggest that the proposed class of stochastic models is capable of reproducing the fine-scale structure of the rainfall process, and hence provides a useful tool in hydrological modelling.
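The forward simulation of such a DSPP can be sketched with competing exponential clocks: at any instant, either the next bucket tip arrives (at the rate of the current hidden state) or the hidden Markov state switches. The two-state rates and switching intensity below are illustrative assumptions, not fitted values from the paper, which estimates parameters by maximum likelihood from rain-gauge data.

```python
import random

def simulate_mmpp(rates, q_switch, t_end, seed=0):
    """Simulate a two-state Markov-modulated (doubly stochastic) Poisson process.

    Bucket-tip arrivals N(t) occur at rates[state]; the hidden state X(t)
    switches with intensity q_switch. Returns the list of tip times.
    """
    rng = random.Random(seed)
    t, state, tips = 0.0, 0, []
    while t < t_end:
        # Next event is the minimum of two exponential clocks.
        rate_total = rates[state] + q_switch
        t += rng.expovariate(rate_total)
        if t >= t_end:
            break
        if rng.random() < rates[state] / rate_total:
            tips.append(t)          # a bucket tip is recorded
        else:
            state = 1 - state       # the hidden Markov state switches
    return tips

# Hypothetical "dry" state (0.1 tips/unit time) vs. "wet" state (5.0 tips/unit).
tips = simulate_mmpp(rates=(0.1, 5.0), q_switch=0.2, t_end=100.0)
```

Clustered bursts of tips during wet-state sojourns are exactly the fine-scale structure the fitted models aim to reproduce.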

Keywords: fine-scale rainfall, maximum likelihood, point process, stochastic model

Procedia PDF Downloads 261
15043 Differential Approach to Technology Aided English Language Teaching: A Case Study in a Multilingual Setting

Authors: Sweta Sinha

Abstract:

The rapid evolution of technology has changed language pedagogy as well as perspectives on language use, leading to strategic changes in discourse studies. We are now firmly embedded in a time when digital technologies have become an integral part of our daily lives. This has led to generalized approaches to English Language Teaching (ELT), which raise two concerns in linguistically diverse settings: a) the diverse linguistic backgrounds of the learners might interfere with the learning process, and b) differing levels of already-acquired knowledge of the target language might make classroom practices too easy or too difficult for the target group of learners. ELT needs a more systematic and differential pedagogical approach for greater efficiency and accuracy. The present research analyses the need to identify learner groups at different levels of target-language proficiency, based on a longitudinal study of 150 undergraduate students. The learners were divided into five groups based on their performance on a twenty-point scale in Listening, Speaking, Reading, and Writing (LSRW). The groups were then subjected to varying durations of technology-aided language learning sessions, and their performance was recorded again on the same scale. Identifying groups and introducing differential teaching and learning strategies led to better results than generalized teaching strategies. Language teaching includes different aspects: organizational, technological, sociological, psychological, pedagogical, and linguistic; a facilitator must account for all of these in a carefully devised differential approach that meets the challenge of learner diversity. Apart from justifying the formation of differential groups, the paper attempts to devise a framework accounting for all these aspects in order to make ELT in a multilingual setting much more effective.
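The five-group split from a twenty-point LSRW score can be sketched as a simple banding function. The equal-width cut-offs below are an assumed scheme for illustration; the abstract does not report the actual thresholds used in the study.

```python
def proficiency_group(lsrw_score, max_score=20, n_groups=5):
    """Map an LSRW score (0..max_score) to one of n_groups proficiency bands.

    Equal-width bands are assumed; the top score folds into the last band.
    """
    band = int(lsrw_score * n_groups / max_score)
    return min(band, n_groups - 1)

# Hypothetical learner scores on the twenty-point LSRW scale.
scores = [3, 7, 11, 16, 20]
groups = [proficiency_group(s) for s in scores]  # -> [0, 1, 2, 4, 4]
```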

Keywords: differential groups, English language teaching, language pedagogy, multilingualism, technology aided language learning

Procedia PDF Downloads 381
15042 Toward Automatic Chest CT Image Segmentation

Authors: Angely Sim Jia Wun, Sasa Arsovski

Abstract:

Numerous studies have been conducted on the segmentation of medical images, and segmenting the lungs is one of their common research topics. Our research stemmed from the lack of solutions for automatic bone, airway, and vessel segmentation, despite the existence of multiple lung segmentation techniques. Consequently, currently available software tools for medical image segmentation do not provide automatic lung, bone, airway, and vessel segmentation. This paper presents segmentation techniques, along with an interactive software tool architecture, for segmenting bone, lung, airway, and vessel tissues. Additionally, we propose a method for creating binary masks from automatically generated segments. The key contribution of our approach is a technique for automatic image thresholding using adjustable Hounsfield values and binary mask extraction. The generated binary masks can be successfully used as a training dataset for deep-learning solutions in medical image segmentation. In this paper, we also examine current software tools used for medical image segmentation, discuss our approach, and identify its advantages.
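The Hounsfield-threshold idea behind the binary masks can be sketched as follows. The specific HU windows here are illustrative approximations (bone roughly above 300 HU, lung air well below -400 HU); the paper's point is that the thresholds are adjustable, and its exact values are not given in the abstract.

```python
def binary_mask(ct_slice, hu_min, hu_max):
    """Produce a binary mask: 1 where a voxel's Hounsfield value is in range."""
    return [[1 if hu_min <= v <= hu_max else 0 for v in row]
            for row in ct_slice]

# Tiny hypothetical 2x3 slice of Hounsfield unit values.
ct = [[-700, 40, 900],
      [-450, 20, 350]]
bone = binary_mask(ct, 300, 3000)    # -> [[0, 0, 1], [0, 0, 1]]
lung = binary_mask(ct, -1000, -400)  # -> [[1, 0, 0], [1, 0, 0]]
```

Masks built this way, one per tissue class, are exactly the kind of label images that can seed a training dataset for a segmentation network such as U-Net.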

Keywords: lung segmentation, binary masks, U-Net, medical software tools

Procedia PDF Downloads 80
15041 A New Approach to the Boom Welding Technique by Determining Seam Profile Tracking

Authors: Muciz Özcan, Mustafa Sacid Endiz, Veysel Alver

Abstract:

In this paper, we present a new approach to boom welding for mobile crane manufacturing, implementing a new method to obtain homogeneous welding quality and reduced energy usage during boom production. We aim to achieve the same welding quality in every region of the boom during the manufacturing process and to detect possible welding errors so that they can be eliminated using laser sensors. Our system determines the position of the welding region directly, and with the help of the welding oscillator, we are able to perform a proper boom weld. Errors that may occur in the welding process can be observed by monitoring and eliminated by an operator. The major modification in the production of crane booms concerns their form. Although conventionally more than one weld is required for this process, with the suggested concept, a single weld is sufficient, which is more energy- and environment-friendly. Consequently, as only one weld is needed for manufacturing the boom, the quality of that particular weld becomes all the more essential. To ensure the welding quality, a welding manipulator was designed and fabricated. By using this welding manipulator, the risks posed to the operator and the surroundings by the dangerous gases formed during the welding process are diminished as much as possible.

Keywords: boom welding, seam tracking, energy saving, global warming

Procedia PDF Downloads 332
15040 Auditory Function in Hypothyroidism as Compared to Controls

Authors: Mrunal Phatak

Abstract:

Introduction: Thyroid hormone is important for the normal function of the auditory system, and hearing impairment can occur insidiously in subclinical hypothyroidism. The present study was undertaken with the aim of evaluating audiological tests such as tuning fork tests, pure tone audiometry, brainstem auditory evoked potentials (BAEPs), and auditory reaction time (ART) in hypothyroid women and in age- and sex-matched controls, to evaluate the effect of thyroid hormone on hearing. The objective of the study was to investigate hearing status through the audiological profile in hypothyroidism (group 1) and healthy controls (group 2), to compare the audiological profile between these groups, and to find the correlation of TSH, T3, and T4 levels with the above parameters. Material and methods: A total sample of 124 women aged 30 to 50 years was recruited and divided into a cases group comprising 62 newly diagnosed hypothyroid women and a control group of 62 women with normal thyroid profiles. Otoscopic examination, tuning fork tests, pure tone audiometry (PTA), brainstem auditory evoked potentials (BAEP), and auditory reaction time (ART) were assessed in both ears, i.e., a total of 248 ears across all subjects. Results: By BAEPs, hearing impairment was detected in a total of 64 ears (51.61%). In cases, a significant increase was seen in wave V latency, IPL I-V, and IPL III-V, and a decrease was seen in the amplitude of waves I and V in both ears. A positive correlation of wave V latency of the right and left ears is seen with TSH levels (p < 0.001), and a negative correlation with T3 (p > 0.05) and with T4 (p < 0.01). A negative correlation of wave V amplitude of the right and left ears is seen with TSH levels (p < 0.001), and a significant positive correlation is seen with T3 and T4. Pure tone audiometry showed hearing impairment of conductive (31.29%), sensorineural (36.29%), and mixed (15.32%) types.
Hearing loss was mild in 65.32% of ears and moderate in 17.74% of ears. Pure tone averages (PTA) were significantly higher in cases than in controls in both ears. A significant positive correlation of the PTA of the right and left ears is seen with TSH levels (p < 0.05), and a negative correlation with T3 and T4. A significant increase in HF ART and LF ART is seen in cases as compared to controls. A positive correlation of high-frequency and low-frequency ART is seen with TSH levels, and a negative correlation with T3 and T4 (p > 0.05). Conclusion: The abnormal BAEPs in hypothyroid women suggest an impaired central auditory pathway. BAEP abnormalities are indicative of a nonspecific injury in the bulbo-ponto-mesencephalic centers. The results of the auditory investigations suggest a causal relationship between hypothyroidism and hearing loss. The site of the lesion in the auditory pathway is probably at several levels, namely in the middle ear and at cochlear and retrocochlear sites. Prolonged ART also suggests an impairment in central processing mechanisms. The present study concludes that the probable reason for hearing impairment in hypothyroidism may be delayed impulse conduction in the acoustic nerve up to the level of the midbrain (IPL I-V, III-V), particularly the inferior colliculus (wave V), together with the impairment in central processing mechanisms shown by prolonged ART.

Keywords: hypothyroidism, deafness, pure tone audiometry, brain stem auditory evoked potential

Procedia PDF Downloads 20
15039 Selective Circular Dichroism Sensor Based on the Generation of Quantum Dots for Cadmium Ion Detection

Authors: Pradthana Sianglam, Wittaya Ngeontae

Abstract:

A new approach to the fabrication of a cadmium ion (Cd2+) sensor is demonstrated. The detection principle is based on the in-situ generation of cadmium sulfide quantum dots (CdS QDs) in the presence of a chiral thiol-containing compound, with detection by circular dichroism (CD) spectroscopy. Basically, CdS QDs can be generated in the presence of Cd2+, sulfide ions, and suitable capping compounds. A strong CD signal is recorded if the generated QDs possess chirality (from the chiral capping molecule). Thus, the degree of CD signal change depends on the number of generated CdS QDs, which can be related to the concentration of Cd2+ (with the other components in excess). In this work, we use a mixture of cysteamine (Cys) and L-penicillamine (LPA) as the capping molecules. A strong CD signal is observed when the solution contains sodium sulfide, Cys, LPA, and Cd2+. Moreover, the CD signal is linearly related to the concentration of Cd2+. This approach shows excellent selectivity toward Cd2+ compared to other cations. The proposed CD sensor provides a detection limit of around 70 µM and can be used with real water samples with satisfactory results.

Keywords: circular dichroism sensor, quantum dots, enantiomer, in-situ generation, chemical sensor, heavy metal ion

Procedia PDF Downloads 355
15038 Development of High Strength Filler Consumables by Means of Calculations and Microstructural Characterization

Authors: S. Holly, R. Schnitzer, P. Haslberger, D. Zügner

Abstract:

The development of new filler consumables requires considerable effort in samples and experiments to achieve the required mechanical properties and chemistry. In the scope of developing a metal-cored wire with a target tensile strength of 1150 MPa and acceptable impact toughness, thermodynamic and kinetic calculations via MatCalc were used to reduce the experimental work and the resources required. Micro-alloying elements were used to reach the high strength, as an alternative to conventional solid-solution hardening. In order to understand the influence of different micro-alloying elements in more detail, their effect on the precipitation behavior in the weld metal was evaluated. Investigations of the microstructure were made via atom probe and EBSD to understand the effect of the micro-alloying elements. The calculated results are in accordance with the experimental results and can be explained by the microstructural investigations. The approach is exemplified using aluminium, clarifying its efficiency for alloy development.

Keywords: alloy development, high strength steel, MatCalc, metal-cored wire

Procedia PDF Downloads 224
15037 Building User Behavioral Models by Processing Web Logs and Clustering Mechanisms

Authors: Madhuka G. P. D. Udantha, Gihan V. Dias, Surangika Ranathunga

Abstract:

Today's websites contain very interesting applications, but there are only a few methodologies for analyzing user navigation through a website and determining whether the website is being put to its intended use. Web logs are typically consulted only when a major attack or malfunction occurs, yet they contain a lot of interesting information about user activity in the system. Analyzing web logs has become a challenge due to the huge log volume, and finding interesting patterns is not easy given the size and distribution of the logs and the importance of minor details in each entry. Web logs thus contain very important data about users and the site that is not being put to good use. Retrieving interesting information from logs gives an idea of what users need, allows grouping users according to their various needs, and helps improve the site to make it effective and efficient. The model we built is able to detect attacks or malfunctioning of the system and to perform anomaly detection. Logs become more complex as the volume of traffic and the size and complexity of the website grow. Unsupervised techniques are used in this fully automated solution; expert knowledge is only used in validation. In our approach, we first clean and purify the logs to bring them to a common platform with a standard format and structure. After the cleaning module, the web session builder is executed. It outputs two files: a Web Sessions file and an Indexed URLs file. The Indexed URLs file contains the list of URLs accessed and their indices, while the Web Sessions file lists the indices of each web session. Then the DBSCAN and EM algorithms are used iteratively and recursively to obtain the best clustering of the web sessions. Using homogeneity, completeness, V-measure, intra- and inter-cluster distance, and the silhouette coefficient as parameters, these algorithms self-evaluate in order to feed better parametric values into subsequent runs. If a cluster is found to be too large, micro-clustering is used.
Using the Cluster Signature Module, the clusters are annotated with a unique signature called a fingerprint. In this module, each cluster is fed to an Associative Rule Learning Module; if it outputs confidence and support values of 1 for an access sequence, that sequence is a potential signature for the cluster. The occurrences of the access sequence are then checked in the other clusters, and if it is found to be unique to the cluster considered, the cluster is annotated with that signature. These signatures are used in anomaly detection, preventing cyber attacks, real-time dashboards that visualize users accessing web pages, predicting user actions, and various other applications for finance, university, and news and media websites, among others.
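The density-based core of the pipeline can be sketched with a minimal DBSCAN over a single session feature. This toy version (1-D distances, hand-picked eps and min_pts) only illustrates the clustering step; the paper clusters indexed multi-URL sessions and tunes the parameters automatically via the quality measures listed above.

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN over 1-D values (e.g. session lengths).

    Returns a label per point: cluster id, or -1 for noise.
    """
    def neighbours(i):
        return [j for j, q in enumerate(points) if abs(points[i] - q) <= eps]

    labels = [None] * len(points)       # None = unvisited, -1 = noise
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            labels[i] = -1              # not dense enough: mark as noise
            continue
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] in (None, -1):
                if labels[j] is None:
                    nb = neighbours(j)
                    if len(nb) >= min_pts:
                        queue.extend(nb)    # j is a core point: grow the cluster
                labels[j] = cluster
        cluster += 1
    return labels

# Hypothetical session lengths: two dense groups plus one outlier session.
sessions = [1.0, 1.1, 1.2, 5.0, 5.1, 5.2, 9.9]
labels = dbscan(sessions, eps=0.5, min_pts=2)  # -> [0, 0, 0, 1, 1, 1, -1]
```

The noise label (-1) is what makes DBSCAN attractive here: anomalous sessions fall outside every dense region rather than being forced into a cluster.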

Keywords: anomaly detection, clustering, pattern recognition, web sessions

Procedia PDF Downloads 274
15036 Evaluation of Occupational Doses in Interventional Radiology

Authors: Fernando Antonio Bacchim Neto, Allan Felipe Fattori Alves, Maria Eugênia Dela Rosa, Regina Moura, Diana Rodrigues De Pina

Abstract:

Interventional radiology is the radiology modality that delivers the highest dose values to medical staff. Recent research shows that personal dosimeters may underestimate dose values in interventional physicians, especially at the extremities (hands and feet) and the eye lens. The aim of this work was to study the radiation exposure levels of medical staff in different interventional radiology procedures and to estimate the annual maximum number of procedures (AMN) that each physician could perform without exceeding the annual dose limits established by regulation. For this purpose, LiF:Mg,Ti (TLD-100) dosimeters were positioned on different body regions of the interventional physician (eye lens, thyroid, chest, gonads, hand, and foot), above the radiological protection garments such as the lead apron and thyroid shield. Attenuation values for the lead protection garments were based on international guidelines; 90% attenuation was assumed for the lead vests and 60% for the protective glasses. Twenty-five procedures were evaluated: 10 diagnostic, 10 angioplasty, and 5 aneurysm treatments. The AMN for diagnostic procedures was 641 for the primary interventional radiologist and 930 for the assisting interventional radiologist. For angioplasty procedures, the AMN was 445 for the primary and 1202 for the assisting interventional radiologist. For aneurysm treatment procedures, the AMN was 113 for the primary and 215 for the assisting interventional radiologist. All AMN values were limited by the eye lens doses, already accounting for the use of protective glasses. In all categories evaluated, the highest dose values were found at the gonads and in the lower body regions of the professionals, both for the primary and the assisting interventionist, but the eye lens dose limits are lower than those for these regions.
Additional protection, such as mobile barriers positioned between the interventionist and the patient, can decrease the exposure of the eye lens, providing greater protection for the medical staff. Alternating the professionals who perform each type of procedure can reduce the dose values they receive over a given period. The analysis of dose profiles proposed in this work showed that personal dosimeters positioned on the chest may underestimate dose values in other body parts of the interventional physician, especially the extremities and eye lens. As each body region of the interventionist is subject to different levels of exposure, the dose distribution per region provides a better basis for deciding what actions are necessary to ensure the radiological protection of the medical staff.
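The AMN arithmetic is simply the annual dose limit divided by the shielded per-procedure dose. The per-procedure dose value below is hypothetical, back-calculated so the result matches the reported AMN of 641; the 20 mSv/y eye-lens limit and 40% transmission through protective glasses (60% attenuation) follow the figures in the abstract.

```python
def annual_max_procedures(dose_per_procedure_mSv, annual_limit_mSv,
                          shield_transmission=1.0):
    """AMN = annual dose limit / (per-procedure dose after shielding)."""
    effective_dose = dose_per_procedure_mSv * shield_transmission
    return int(annual_limit_mSv / effective_dose)

# Hypothetical eye-lens dose of 0.078 mSv per diagnostic procedure,
# 20 mSv/y eye-lens limit, 40% transmission through protective glasses:
amn = annual_max_procedures(0.078, annual_limit_mSv=20,
                            shield_transmission=0.4)  # -> 641
```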

Keywords: interventional radiology, radiation protection, occupationally exposed individual, hemodynamic

Procedia PDF Downloads 379
15035 Market Solvency Capital Requirement Minimization: How Non-linear Solvers Provide Portfolios Complying with Solvency II Regulation

Authors: Abraham Castellanos, Christophe Durville, Sophie Echenim

Abstract:

In this article, a portfolio optimization problem is solved in a Solvency II context, illustrating how advanced optimization techniques can help tackle complex operational pain points around the monitoring, control, and stability of the Solvency Capital Requirement (SCR). The market SCR of a portfolio is calculated as a combination of SCR sub-modules. These sub-modules are the results of stress tests on interest rate, equity, property, credit, and FX factors, as well as concentration on counterparties. The market SCR is non-convex and non-differentiable, which does not make it a natural candidate as an optimization criterion. In the SCR formulation, the correlations between sub-modules are fixed, whereas risk-driven portfolio allocation is usually driven by the dynamics of the actual correlations. Implementing a portfolio construction approach that is efficient from both a regulatory and an economic standpoint is therefore not straightforward. Moreover, the challenge for insurance portfolio managers is not only to achieve a minimal SCR, to reduce non-invested capital, but also to ensure the stability of the SCR. Some optimizations have already been performed in the literature by simplifying the standard formula into a quadratic function, but to our knowledge, this is the first time that the standard formula of the market SCR is used directly in an optimization problem. Two solvers are combined: a bundle algorithm for convex non-differentiable problems, and a BFGS (Broyden-Fletcher-Goldfarb-Shanno)-SQP (Sequential Quadratic Programming) algorithm to cope with non-convex cases. A market SCR minimization is then performed with historical data. This approach results in a significant reduction of the capital requirement, compared to a classical Markowitz approach based on historical volatility.
A comparative analysis of different optimization models (equi-risk-contribution portfolio, minimum-volatility portfolio, and minimum value-at-risk portfolio) is performed, and the impact of these strategies on risk measures, including the market SCR and its sub-modules, is evaluated. A lack of diversification of the market SCR is observed, especially for equities. This was expected, since the market SCR strongly penalizes this type of financial instrument. It is shown that this direct effect of the regulation can be attenuated by implementing constraints in the optimization process or by minimizing the market SCR together with the historical volatility, proving the value of a portfolio construction approach that can incorporate such features. The results are further explained by the market SCR modelling.
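The standard-formula aggregation that makes the market SCR awkward to optimize is the square-root-of-correlated-sum rule, SCR_mkt = sqrt(sum_ij Corr_ij * SCR_i * SCR_j). The sketch below computes it for hypothetical sub-module charges; the 3x3 correlation matrix is illustrative, not the full regulatory matrix (which covers more sub-modules and depends on the direction of the interest-rate shock).

```python
import math

def market_scr(scr_sub, corr):
    """Aggregate SCR sub-module charges with the standard-formula rule:
    SCR_mkt = sqrt( sum_ij corr[i][j] * scr_sub[i] * scr_sub[j] ).
    """
    n = len(scr_sub)
    total = sum(corr[i][j] * scr_sub[i] * scr_sub[j]
                for i in range(n) for j in range(n))
    return math.sqrt(total)

# Hypothetical sub-module charges: interest rate, equity, spread.
sub = [30.0, 80.0, 40.0]
corr = [[1.0, 0.5, 0.5],
        [0.5, 1.0, 0.75],
        [0.5, 0.75, 1.0]]
scr = market_scr(sub, corr)   # < sum(sub): diversification benefit
```

The dependence on fixed sub-module charges (themselves the results of non-smooth stress tests) is what makes the overall criterion non-convex and non-differentiable, motivating the bundle and BFGS-SQP solvers combined in the paper.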

Keywords: financial risk, numerical optimization, portfolio management, solvency capital requirement

Procedia PDF Downloads 105
15034 Targeting and Developing the Remaining Pay in an Ageing Field: The Ovhor Field Experience

Authors: Christian Ihwiwhu, Nnamdi Obioha, Udeme John, Edward Bobade, Oghenerunor Bekibele, Adedeji Awujoola, Ibi-Ada Itotoi

Abstract:

Understanding the complexity of hydrocarbon distribution in a simple structure with flow baffles and connectivity issues is critical to targeting and developing the remaining pay in a mature asset. Subtle facies changes (heterogeneity) can have a drastic impact on reservoir fluid movement, and this can be crucial to identifying sweet spots in mature fields. This study aims to evaluate selected reservoirs in the Ovhor Field, Niger Delta, Nigeria, with the objective of optimising production from the field by targeting undeveloped oil reserves and bypassed pay, and of gaining an improved understanding of the selected reservoirs to increase the company's reservoir limits. The task at the Ovhor Field is complicated by poor stratigraphic seismic resolution over the field. 3-D geological (sedimentology and stratigraphy) interpretation, results from quantitative interpretation, and a proper understanding of production data have been used to recognize flow baffles and undeveloped compartments in the field. The full-field 3-D model has been constructed to capture the heterogeneities and the various compartments in the field, to aid proper simulation of fluid flow for future production prediction, proper history matching, and the design of well trajectories that adequately target undeveloped oil. Reservoir property models (porosity, permeability, and net-to-gross) have been constructed by biasing log-interpreted properties to a defined environment-of-deposition model whose interpretation captures the heterogeneities expected in the studied reservoirs. At least two scenarios have been modelled for most of the studied reservoirs to capture the range of uncertainties involved. The total original oil-in-place volume for the four reservoirs studied is 157 MMstb. 
The cumulative oil and gas production from the selected reservoirs is 67.64 MMstb and 9.76 Bscf, respectively, with a current production rate of about 7035 bopd and 4.38 MMscf/d (as at 31/08/2019). Dynamic simulation and production forecasting on the four reservoirs gave an undeveloped reserve of about 3.82 MMstb from two identified oil restoration activities: side-tracking and re-perforation of existing wells. This integrated approach led to the identification of bypassed oil in some areas of the selected reservoirs and to an improved understanding of the studied reservoirs. New wells have been, and are being, drilled to test the results of this study, and the results so far are confirmatory and satisfying.

Keywords: facies, flow baffle, bypassed pay, heterogeneities, history matching, reservoir limit

Procedia PDF Downloads 114
15033 Calibration of Contact Model Parameters and Analysis of Microscopic Behaviors of Cuxhaven Sand Using the Discrete Element Method

Authors: Anjali Uday, Yuting Wang, Andres Alfonso Pena Olare

Abstract:

The Discrete Element Method is a promising approach to modeling the microscopic behavior of granular materials. The quality of the simulations, however, depends on the model parameters used. The present study focuses on the calibration and validation of discrete element parameters for Cuxhaven sand based on experimental data from triaxial and oedometer tests. A sensitivity analysis was conducted during the sample preparation stage and the shear stage of the triaxial tests. The influence of parameters such as the rolling resistance, inter-particle friction coefficient, confining pressure, and effective modulus on the void ratio of the generated sample was investigated. During the shear stage, the effects of the inter-particle friction coefficient, effective modulus, rolling resistance friction coefficient, and normal-to-shear stiffness ratio were examined. The parameters are calibrated such that the simulations reproduce macro-mechanical characteristics such as the dilation angle, peak stress, and stiffness. The calibrated parameters are then validated by simulating an oedometer test on the sand. The oedometer test results are in good agreement with the experiments, which confirms the suitability of the calibrated parameters. In the next step, the calibrated and validated model parameters are applied to predict micromechanical behavior, including the evolution of contact force chains, buckling of particle columns, non-coaxiality, and sample inhomogeneity during a simple shear test. The evolution of contact force chains vividly shows the distribution and alignment of strong contact forces. The changes in coordination number are in good agreement with the volumetric strain exhibited during the simple shear test. The vertical inhomogeneity of void ratios is documented throughout the shearing phase, showing looser structures in the top and bottom layers. 
Buckling of particle columns is not observed, owing to the small rolling resistance coefficient adopted in the simulations. The non-coaxiality of the principal stress and strain-rate directions is also well captured. Thus, the micromechanical behavior is well described using the calibrated and validated material parameters.
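The calibration loop described above, matching simulated macro-mechanical responses to experimental targets, can be sketched as follows. Here a made-up analytic function stands in for an actual DEM run, and all target values and parameter ranges are illustrative assumptions, not the Cuxhaven sand data:

```python
import itertools
import numpy as np

# Experimental targets from a triaxial test (illustrative values).
TARGETS = {"peak_stress_kpa": 450.0, "dilation_angle_deg": 12.0}

def toy_dem_response(friction, rolling_resistance):
    """Stand-in for a full DEM simulation: maps contact parameters to
    macro quantities with a made-up monotone relation, for illustration."""
    peak = 300.0 + 400.0 * friction + 150.0 * rolling_resistance
    dilation = 30.0 * friction + 20.0 * rolling_resistance
    return {"peak_stress_kpa": peak, "dilation_angle_deg": dilation}

def misfit(sim):
    """Sum of squared relative errors against the experimental targets."""
    return sum(((sim[k] - v) / v) ** 2 for k, v in TARGETS.items())

# Grid search over the two contact parameters; in practice each grid
# point would be one DEM run, so the grid is kept coarse.
grid = itertools.product(np.linspace(0.1, 0.6, 11), np.linspace(0.0, 0.3, 7))
best = min(grid, key=lambda p: misfit(toy_dem_response(*p)))
```

The same loop structure applies when `toy_dem_response` is replaced by an actual DEM solver call; only the run time changes.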

Keywords: discrete element model, parameter calibration, triaxial test, oedometer test, simple shear test

Procedia PDF Downloads 106
15032 Some Observations on the Preparation of Zinc Hydroxide Nitrate Nanoparticles

Authors: Krasimir Ivanov, Elitsa Kolentsova, Nguyen Nguyen, Alexander Peltekov, Violina Angelova

Abstract:

Nanosized zinc hydroxide nitrate has recently been assessed as a promising foliar fertilizer, with improved zinc solubility and low phytotoxicity in comparison with ZnO and other Zn-containing compounds. The main challenge is obtaining stable particles with dimensions of less than 100 nm. This work studies the effect of preparation conditions on the chemical composition and particle size of zinc hydroxide nitrates prepared by precipitation. Zn(NO3)2.6H2O and NaOH with concentrations ranging from 0.2 to 3.2 M and initial OH/Zn ratios from 0.5 to 1.6 were used at temperatures from 20 to 60 °C. All samples were characterized in detail by X-ray diffraction, scanning electron microscopy, differential thermal analysis, and ICP. The stability and distribution of the zinc hydroxide nitrate particles were also assessed.

Keywords: zinc hydroxide nitrate, nanoparticles, preparation, foliar fertilizer

Procedia PDF Downloads 329
15031 Fracture Energy Corresponding to the Puncture/Cutting of Nitrile Rubber by Pointed Blades

Authors: Ennouri Triki, Toan Vu-Khanh

Abstract:

Resistance to combined puncture/cutting by pointed blades is an important property of glove materials. The purpose of this study is to propose an approach, derived from fracture mechanics theory, to calculate the fracture energy associated with the puncture/cutting of nitrile rubber. The proposed approach is also based on pre-straining the sample during the puncture/cutting test in order to remove the contribution of friction. It was validated with two pointed blade angles, 22.5° and 35°. Results show that the total applied fracture energy corresponding to puncture/cutting is controlled by three contributions: the intrinsic fracture energy (strength) of the material, the friction energy between the pointed blade and the material, and the applied pre-strain (tearing) energy. For an applied pre-strain energy of sufficiently high value, the friction energy is completely removed. Without friction, the total fracture energy is constant; in that case, the fracture contribution of the tearing energy is marginal, and crack growth is caused entirely by the puncture/cutting action of the pointed blade. Finally, the results suggest that the fracture energy corresponding to puncture/cutting by pointed blades is obtained at a frictional contribution of zero.
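The extraction of the intrinsic fracture energy from the friction-free plateau can be sketched numerically. The measurement values below are hypothetical, invented for illustration; the only idea taken from the abstract is that the total energy plateaus once pre-strain has removed friction:

```python
import numpy as np

# Hypothetical measurements: total puncture/cutting energy (kJ/m^2) at
# increasing pre-strain (tearing) energies; friction vanishes at high
# pre-strain, so the curve flattens to the intrinsic fracture energy.
tearing_energy = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
total_energy   = np.array([4.1, 3.3, 2.7, 2.35, 2.3, 2.3])

# The plateau value is the intrinsic fracture energy; the excess at zero
# pre-strain is the frictional contribution that the pre-strain removes.
intrinsic = total_energy[-2:].mean()
friction_at_zero = total_energy[0] - intrinsic
```

With real data one would confirm the plateau statistically (e.g., by checking that successive differences fall below measurement noise) before reading off the intercept.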

Keywords: elastomer, energy, fracture, friction, pointed blades

Procedia PDF Downloads 286
15030 Experimental Measurements of Evacuated Enclosure Thermal Insulation Effectiveness for Vacuum Flat Plate Solar Thermal Collectors

Authors: Paul Henshall, Philip Eames, Roger Moss, Stan Shire, Farid Arya, Trevor Hyde

Abstract:

Encapsulating the absorber of a flat plate solar thermal collector in an enclosure that can be evacuated can result in a significant increase in collector performance and achievable operating temperatures. This is a result of the thermal insulation effectiveness of the vacuum layer surrounding the absorber, as less heat is lost during collector operation. This work describes experimental thermal insulation characterization tests of prototype vacuum flat plate solar thermal collectors that demonstrate the improvement in absorber heat loss coefficients. Furthermore, it describes the selection and sizing of a getter, activatable at low temperatures, suitable for maintaining the vacuum inside the enclosure for the lifetime of the collector.

Keywords: vacuum, thermal, flat-plate solar collector, insulation

Procedia PDF Downloads 377
15029 An Attempt to Improve Students' Understanding of Thermal Conductivity Using Thermal Cameras

Authors: Mariana Faria Brito Francisquini

Abstract:

Many thermal phenomena are present in, and play a substantial role in, our daily lives. This presence makes the study of this area, at both the high school and university levels, a widely explored topic in the literature. However, many concepts essential to a meaningful understanding of the world are neglected in favor of a traditional approach built on rote algebraic problems. In this work, we intend to show how the introduction of new technologies in the classroom, namely thermal cameras, can work in our favor to provide a clearer understanding of many of these concepts, such as thermal conductivity. The use of thermal cameras in the classroom tends to diminish the persistent abstractness of thermal phenomena, as they enable us to visualize something that happens right before our eyes, yet cannot be seen. In our study, we provide the same amount of heat to metallic cylindrical rods of the same length but of different materials in order to study the thermal conductivity of each one. The thermal camera allows us to visualize the increase in temperature along each rod in real time, enabling us to infer how heat is transferred from one part of the rod to another. Therefore, we intend to show how this approach can expose students to scenarios more enriching and intellectually prolific than those provided by traditional approaches.
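The physics behind what the thermal camera shows, heat spreading faster along a high-conductivity rod, can be sketched with a simple explicit finite-difference solution of 1-D heat conduction. All material and experimental parameters below are illustrative assumptions, not measurements from the classroom setup:

```python
import numpy as np

def rod_temperature(k, steps=2000, n=50, dx=0.01, dt=0.05,
                    rho_c=3.4e6, q_in=5e4):
    """Explicit finite differences for a 1-D rod heated at one end.
    k: thermal conductivity (W/m/K); rho_c: volumetric heat capacity
    (J/m^3/K); q_in: heat flux at the heated end (W/m^2). The stability
    condition alpha*dt/dx^2 <= 0.5 holds for the values used here."""
    alpha = k / rho_c                       # thermal diffusivity, m^2/s
    T = np.full(n, 20.0)                    # initial temperature, deg C
    for _ in range(steps):
        lap = np.zeros(n)
        lap[1:-1] = T[2:] - 2 * T[1:-1] + T[:-2]
        lap[0] = T[1] - T[0]                # insulated ends (zero flux)
        lap[-1] = T[-2] - T[-1]
        T = T + alpha * dt / dx**2 * lap
        T[0] += q_in * dt / (rho_c * dx)    # constant heating at one end
    return T

copper = rod_temperature(k=400.0)   # high conductivity: heat spreads far
steel = rod_temperature(k=50.0)     # low conductivity: heat stays local
```

After the same heating time, the copper rod is warmer at its midpoint while the steel rod is hotter at the heated end, which is exactly the contrast the thermal camera makes visible.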

Keywords: teaching physics, thermal cameras, thermal conductivity, thermal physics

Procedia PDF Downloads 264
15028 Machine Learning Model to Predict TB Bacteria-Resistant Drugs from TB Isolates

Authors: Rosa Tsegaye Aga, Xuan Jiang, Pavel Vazquez Faci, Siqing Liu, Simon Rayner, Endalkachew Alemu, Markos Abebe

Abstract:

Tuberculosis (TB) is a major cause of disease globally. In most cases, TB is treatable and curable, but only with proper treatment. Drug-resistant TB occurs when the bacteria become resistant to the drugs used to treat the disease. Current strategies to identify drug-resistant TB are laboratory-based and take a long time to identify the resistant bacteria and treat the patient accordingly, but machine learning (ML) and data science can offer new approaches to the problem. In this study, we propose to develop an ML-based model that predicts the antibiotic resistance phenotypes of TB isolates in minutes, so that the right treatment can be given to the patient immediately. The study uses whole genome sequences (WGS) of TB isolates, extracted from the NCBI repository and containing samples from different countries, as training data for the ML models. Samples from different countries were included in order to generalize over the large group of TB isolates from different regions of the world; this exposes the model to the different behaviors of the TB bacteria and makes it robust. The model training considers three types of information extracted from the WGS data: all variants found within the candidate genes (F1), predetermined resistance-associated variants (F2), and resistance-associated gene information for the particular drug. Two major datasets were constructed from these: F1 and F2 are treated as two independent datasets, and the third type of information is used as the class label for both. Five machine learning algorithms were considered for training the model: Support Vector Machine (SVM), Random Forest (RF), Logistic Regression (LR), Gradient Boosting, and AdaBoost. 
The models were trained on the datasets F1, F2, and F1F2, the latter being F1 and F2 merged. Additionally, an ensemble approach was used: F1 and F2 were each run through the Gradient Boosting algorithm, their outputs were combined into a single dataset, called the F1F2 ensemble dataset, and a model was trained on this dataset with each of the five algorithms. As the experiments show, the ensemble model trained with the Gradient Boosting algorithm outperformed the rest. In conclusion, this study suggests the ensemble approach, that is, the RF + Gradient Boosting model, for predicting the antibiotic resistance phenotypes of TB isolates, as it outperformed the rest of the models.
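The two-stage ensemble described above can be sketched with scikit-learn on synthetic data. The feature sets, sample counts, and the toy resistance rule below are invented stand-ins for the real F1/F2 variant matrices and phenotype labels:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for the two variant feature sets (F1: candidate-gene
# variants, F2: known resistance-associated variants) and phenotype labels.
n = 600
F1 = rng.integers(0, 2, size=(n, 40)).astype(float)
F2 = rng.integers(0, 2, size=(n, 10)).astype(float)
y = (F2[:, 0] + F2[:, 1] + F1[:, 0] > 1).astype(int)   # toy resistance rule

idx_train, idx_test = train_test_split(np.arange(n), random_state=0)

# Stage 1: fit Gradient Boosting separately on F1 and on F2.
gb1 = GradientBoostingClassifier(random_state=0).fit(F1[idx_train], y[idx_train])
gb2 = GradientBoostingClassifier(random_state=0).fit(F2[idx_train], y[idx_train])

def stack(idx):
    """Combine the two models' outputs into the 'F1F2 ensemble' features."""
    return np.column_stack([gb1.predict_proba(F1[idx])[:, 1],
                            gb2.predict_proba(F2[idx])[:, 1]])

# Stage 2: train a final classifier (here RF) on the stacked outputs.
final = RandomForestClassifier(random_state=0).fit(stack(idx_train), y[idx_train])
acc = final.score(stack(idx_test), y[idx_test])
```

In practice stage-1 predictions for the training rows should come from cross-validation folds to avoid leaking the labels into the stacked features; that refinement is omitted here for brevity.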

Keywords: machine learning, MTB, WGS, drug resistant TB

Procedia PDF Downloads 32
15027 Testing of Gas Turbine KingTech with Biodiesel

Authors: Nicolas Lipchak, Franco Aiducic, Santiago Baieli

Abstract:

The present work is part of the research project 'Testing of gas turbine KingTech with biodiesel', carried out by the Department of Industrial Engineering of the National Technological University at Buenos Aires. The research group aims to experiment with biodiesel in a KingTech K-100 gas turbine to verify its correct operation. To this end, tests have been developed to obtain real data for the parameters of the working cycle, to be used later for comparison and performance analysis. In the first instance, the study consisted of testing the gas turbine with a fuel mixture of 50% biodiesel and 50% diesel; the parameters measured were compared with those of the turbine running on 100% diesel. In the second instance, the measured parameters were used to calculate the power generated and the thermal efficiency of the KingTech K-100 turbine. The turbine was also inspected to verify the condition of its internals after the use of biofuel. The conclusions empirically demonstrate that it is feasible to use biodiesel in this type of gas turbine without a loss of power or degradation of the internals.
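The thermal efficiency computed in the second stage is, in essence, shaft power divided by fuel heat input. A minimal sketch follows; the power, fuel flow, and heating values are illustrative assumptions, not the K-100 measurements:

```python
# Typical lower heating values (J/kg); the blend value is interpolated.
LHV_DIESEL = 42.8e6
LHV_BIODIESEL = 37.5e6

def thermal_efficiency(power_w, fuel_flow_kg_s, blend_biodiesel=0.5):
    """eta = shaft power / fuel heat input for a diesel/biodiesel blend."""
    lhv = blend_biodiesel * LHV_BIODIESEL + (1 - blend_biodiesel) * LHV_DIESEL
    return power_w / (fuel_flow_kg_s * lhv)

# e.g. 8 kW of shaft power at 5 g/s of 50/50 blend (made-up figures)
eta = thermal_efficiency(power_w=8.0e3, fuel_flow_kg_s=0.0050)
```

Because biodiesel has a lower heating value than diesel, comparing efficiencies across fuels requires using the blend-specific LHV, as done here.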

Keywords: biodiesel, efficiency, KingTech, turbine

Procedia PDF Downloads 228
15026 Long-Term Economic-Ecological Assessment of Optimal Local Heat-Generating Technologies for the German Unrefurbished Residential Building Stock on the Quarter Level

Authors: M. A. Spielmann, L. Schebek

Abstract:

In order to reach the German government's long-term national climate goals for the building sector, substantial energetic measures have to be executed. Historically, those measures have primarily been energy-efficiency measures at the building shell. Advanced technologies for the on-site generation of heat (or other types of energy) are often not feasible at the small spatial scale of a single building. The present approach therefore uses the spatially larger dimension of a quarter. The main focus of the present paper is the long-term economic-ecological assessment of available decentralized heat-generating technologies (CHP plants and electrical heat pumps) at the quarter level for unrefurbished German residential buildings. Three distinct terms have to be described methodologically: i) the quarter approach, ii) the economic assessment, and iii) the ecological assessment. The quarter approach is used to enable synergies and scaling effects beyond a single building. For the present study, generic quarters differentiated according to significant parameters concerning their heat demand are used; the core differentiation of these quarters is the construction period of the buildings. The economic assessment, the second crucial element, is structured as follows: full costs are quantified for each technology combination and quarter. The investment costs are analyzed on an annual basis and are modeled as financed by debt, with annuity loans assumed. Consequently, for each generic quarter, an optimal technology combination for decentralized heat generation is provided for each year within the temporal boundaries (2016-2050). The ecological assessment elaborates a life cycle assessment for each technology combination and each quarter; the impact category measured is GWP 100. The technology combinations for heat production can therefore be compared against each other concerning their long-term climatic impacts. 
The core results of the approach can be differentiated into an economic and an ecological dimension. With an annual resolution, the investment and running costs of different energetic technology combinations are quantified. For each quarter, an optimal technology combination for local heat supply and/or energetic refurbishment of the buildings within the quarter is provided. Coherently with the economic assessment, the climatic impacts of the technology combinations are quantified and compared against each other.
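The debt financing assumed above uses the standard annuity-loan formula, which converts an investment cost into a constant annual payment. The principal, interest rate, and horizon below are illustrative assumptions, not values from the study:

```python
def annuity(principal, rate, years):
    """Constant annual payment repaying `principal` at interest `rate`
    over `years` (standard annuity-loan formula)."""
    q = 1 + rate
    return principal * rate * q**years / (q**years - 1)

# e.g. a 100,000 EUR heat-pump investment financed at 3% over 20 years
payment = annuity(100_000, 0.03, 20)   # about 6,722 EUR per year
```

Summing such annuities with the annual running costs yields the full-cost series on which the per-year technology comparison rests.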

Keywords: building sector, economic-ecological assessment, heat, LCA, quarter level

Procedia PDF Downloads 211
15025 Development of a Congestion Controller of Computer Network Using Artificial Intelligence Algorithm

Authors: Mary Anne Roa

Abstract:

Congestion in a network occurs when aggregate demand exceeds the available capacity of its resources. Network congestion grows as network speeds increase, and new, effective congestion control methods are needed, especially for today's very-high-speed networks. To address this global issue, the study focuses on the development of a fuzzy-based congestion control model concerned with allocating the resources of a computer network such that the system can operate at an adequate performance level when demand exceeds, or is near, the capacity of the resources. Fuzzy-logic-based models have proven capable of accurately representing a wide variety of processes. The model built is based on the bandwidth, the aggregate incoming traffic, and the waiting time. The theoretical analysis and simulation results show that the proposed algorithm provides not only good utilization but also low packet loss.
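The general shape of such a fuzzy controller can be sketched in a few lines: fuzzify the inputs with membership functions, apply a small rule base, and defuzzify to a control output. The membership shapes, rule base, and the choice of queue occupancy and incoming traffic as the two inputs are illustrative assumptions, not the model from the study:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def drop_probability(queue_util, traffic_util):
    """Mamdani-style rules on normalised queue occupancy and incoming
    traffic (both 0..1); weighted average of singleton outputs.
    Illustrative rule base only."""
    low_q, high_q = tri(queue_util, -0.5, 0.0, 0.7), tri(queue_util, 0.3, 1.0, 1.5)
    low_t, high_t = tri(traffic_util, -0.5, 0.0, 0.7), tri(traffic_util, 0.3, 1.0, 1.5)
    rules = [
        (min(low_q, low_t), 0.0),    # both low  -> no drops
        (min(low_q, high_t), 0.3),   # traffic surging into empty queue
        (min(high_q, low_t), 0.5),   # queue draining slowly
        (min(high_q, high_t), 0.9),  # both high -> aggressive drops
    ]
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total else 0.0
```

A router would evaluate this on every sampling interval and drop (or mark) incoming packets with the returned probability, much like an adaptive RED queue.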

Keywords: congestion control, queue management, computer networks, fuzzy logic

Procedia PDF Downloads 377
15024 Mining News Deserts: Impact of Local Newspaper's Closure on Political Participation and Engagement in Rural Australian Town of Lightning Ridge

Authors: Marco Magasic

Abstract:

This article examines how a local newspaper's closure affects the way everyday people in a rural Australian town are informed about and engage with political affairs. It draws on a two-month focused ethnographic study in the outback town of Lightning Ridge, New South Wales, and explores people's media-related practices following the closure of the town's only newspaper, The Ridge News, in 2015. While social media is considered to have partly filled the news void, an increasingly fragmented and less vibrant local public sphere has led to growing complacency among individuals about political affairs. Local residents highlight a dearth of reliable, credible information and lament the loss of the newspaper and its role in community advocacy and in fostering people's engagement with political institutions, especially local government.

Keywords: public sphere, political participation, local news, democratic deficit

Procedia PDF Downloads 142
15023 Long-Term Trends of Sea Level and Sea Surface Temperature in the Mediterranean Sea

Authors: Bayoumy Mohamed, Khaled Alam El-Din

Abstract:

In the present study, 24 years (1993-2016) of gridded sea level anomalies (SLA) from satellite altimetry and daily sea surface temperature (SST) data from the advanced very-high-resolution radiometer (AVHRR) are used. These data are used to estimate the rates of sea level rise and SST warming and their spatial distribution in the Mediterranean Sea. The results reveal a significant sea level rise in the Mediterranean Sea of 2.86 ± 0.45 mm/year, together with a significant warming of 0.037 ± 0.007 °C/year. The high spatial correlation between sea level and SST variations suggests that at least part of the sea level change reported during the study period was due to heating of the surface layers, indicating that the steric effect had a significant influence on sea level change in the Mediterranean Sea.
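A trend of the form "2.86 ± 0.45 mm/year" is typically an ordinary-least-squares slope with its standard error, fitted per grid point to the monthly time series. A minimal sketch on a synthetic series (real SLA data would replace the generated values; the noise level is an assumption):

```python
import numpy as np

# Synthetic monthly SLA series, 1993-2016: a 2.86 mm/yr trend plus noise.
rng = np.random.default_rng(1)
t = np.arange(1993, 2017, 1 / 12)                     # time axis in years
sla = 2.86 * (t - t[0]) + rng.normal(0, 8, t.size)    # mm

# Least-squares fit of slope and intercept.
A = np.column_stack([t - t.mean(), np.ones(t.size)])
coef, *_ = np.linalg.lstsq(A, sla, rcond=None)
trend = coef[0]                                       # mm/year

# Standard error of the slope from the residual variance.
resid = sla - A @ coef
se = np.sqrt(resid @ resid / (t.size - 2) / np.sum(A[:, 0] ** 2))
```

For real altimetry data one would also remove the seasonal cycle and account for serial correlation, both of which widen the quoted uncertainty.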

Keywords: altimetry, AVHRR, Mediterranean Sea, sea level and SST changes, trend analysis

Procedia PDF Downloads 178
15022 The Distribution of Prevalent Supplemental Nutrition Assistance Program-Authorized Food Store Formats Differ by U.S. Region and Rurality: Implications for Food Access and Obesity Linkages

Authors: Bailey Houghtaling, Elena Serrano, Vivica Kraak, Samantha Harden, George Davis, Sarah Misyak

Abstract:

United States (U.S.) Department of Agriculture Supplemental Nutrition Assistance Program (SNAP) participants are low-income Americans receiving federal dollars for supplemental food and beverage purchases. Participants use a variety of (traditional/non-traditional) SNAP-authorized stores for household dietary purchases - stores that also represent food access points for all Americans. Importantly, consumers' food and beverage purchases from non-traditional store formats tend to be higher in saturated fats, added sugars, and sodium than purchases from traditional (e.g., grocery/supermarket) formats. Overconsumption of energy-dense, low-nutrient food and beverage products contributes to high obesity rates and adverse health outcomes that differ in severity among urban/rural U.S. locations and high/low-income populations. Little is known about the SNAP-authorized food store format landscape nationally, regionally, or by urban-rural status, as traditional formats are currently used as the gold standard in food access research. This research utilized publicly available U.S. databases to fill this large literature gap and to provide insight into modes of food access for vulnerable U.S. populations: (1) the SNAP Retailer Locator, which provides a list of all authorized food stores in the U.S., and (2) Rural-Urban Continuum Codes (RUCC), which categorize U.S. counties as urban (RUCC 1-3) or rural (RUCC 4-9). Frequencies were determined for the most prevalent food store formats nationally and within two regionally diverse U.S. states - Virginia in the east and California in the west. Store format codes were assigned (e.g., grocery, drug, convenience, mass merchandiser, supercenter, dollar, club, or other). RUCC was applied to investigate state-level urban-rural differences in prevalent food store formats, and a chi-square test of independence was used to determine whether food store format distributions significantly (p < 0.05) differed by region or rurality. 
The resulting research sample, representing highly prevalent SNAP-authorized food stores nationally, included 41.25% of all SNAP stores in the U.S. (N=257,839), comprising primarily convenience formats (31.94%), followed by dollar (25.58%), drug (19.24%), traditional (10.87%), supercenter (6.85%), non-food store or restaurant (1.81%), mass merchandiser (1.62%), and club formats (1.09%). Results also indicated that the distribution of prevalent SNAP-authorized formats significantly differed by state. California had a lower proportion of traditional (9.96%) and a higher proportion of drug (28.92%) formats than Virginia - 11.55% and 19.97%, respectively (p < 0.001). Virginia also had a higher proportion of dollar formats (26.11%) than California (10.64%) (p < 0.001). Significant differences were also observed for rurality variables (p < 0.001). Notably, rural Virginia had a significantly higher proportion of dollar formats (41.71%) than urban Virginia (21.78%) and rural California (21.21%). Non-traditional SNAP-authorized formats are highly prevalent and differ significantly in distribution by U.S. region and rurality. The largest proportional difference was observed for dollar formats, where the least nutritious consumer purchases are documented in the literature. Researchers and practitioners should investigate non-traditional food stores at the local level, using these findings and similar applied methodologies, to determine how access to various store formats impacts obesity prevalence. For example, dollar stores may be prime targets for interventions to enhance nutritious consumer purchases in rural Virginia, while targeting drug formats may be more appropriate in California.
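The chi-square test of independence used here compares observed format counts against the counts expected if format and state were independent. A minimal sketch with SciPy follows; the contingency counts are made up to roughly mirror the reported proportions, not the study's actual table:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Illustrative contingency table of store-format counts by state
# (rows: Virginia, California; columns: traditional, drug, dollar).
table = np.array([
    [1155, 1997, 2611],   # Virginia
    [ 996, 2892, 1064],   # California
])

# chi2_contingency returns the statistic, p-value, degrees of freedom,
# and the expected counts under independence.
chi2, p, dof, expected = chi2_contingency(table)
```

With a (2 x 3) table the test has (2-1)(3-1) = 2 degrees of freedom, and at these sample sizes even modest proportional differences yield p-values far below 0.001.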

Keywords: food access, food store format, nutrition interventions, SNAP consumers

Procedia PDF Downloads 128
15021 Magnetic Study on YBa₂Cu₃O₇₋δ Nanoparticles Doped by Ferromagnetic Nanoparticles of Y₃Fe₅O₁₂

Authors: Samir Khene

Abstract:

Present and future industrial uses of high-critical-temperature superconductors require high critical temperatures TC and strong current densities JC; these two aims constitute the two motivations of scientific research in this domain. The most significant feature of any superconductor, from the viewpoint of applications, is the maximum electrical transport current density that the superconductor is capable of carrying without loss of energy. In this work, vortex pinning in conventional and high-TC superconductors is studied. Our experiments on vortex pinning in single crystals and nanoparticles of YBa₂Cu₃O₇₋δ and La₁.₈₅Sr₀.₁₅CuO₄ are presented. Special attention is given to the study of YBa₂Cu₃O₇₋δ nanoparticles doped with ferromagnetic nanoparticles of Y₃Fe₅O₁₂. The coexistence of ferromagnetism and superconductivity in this compound is demonstrated, and the influence of these ferromagnetic nanoparticles on the variations of the critical current density JC in YBa₂Cu₃O₇₋δ nanoparticles as a function of applied field H and temperature T is studied.

Keywords: superconductors, high critical temperature, vortices pinning, nanoparticles, ferromagnetism, coexistence

Procedia PDF Downloads 57