Search results for: step drill
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3006


636 Structure-Guided Optimization of Sulphonamides as Gamma-Secretase Inhibitors for the Treatment of Alzheimer’s Disease

Authors: Vaishali Patil, Neeraj Masand

Abstract:

Alzheimer’s disease (AD) is proving to be a lethal disease in older people. According to the amyloid hypothesis, aggregation of the amyloid β–protein (Aβ), particularly its 42-residue variant (Aβ42), plays a direct role in the pathogenesis of AD. Aβ is generated through sequential cleavage of the amyloid precursor protein (APP) by β–secretase (BACE) and γ–secretase (GS). In the treatment of AD, γ-secretase modulators (GSMs) are therefore potential disease-modifying agents, as they selectively lower pathogenic Aβ42 levels by shifting the enzyme cleavage sites without inhibiting γ–secretase activity. This possibly avoids the known adverse effects observed with complete inhibition of the enzyme complex. Virtual screening, via a drug-like ADMET filter, QSAR, and molecular docking analyses, has been utilized to identify novel γ–secretase modulators with a sulphonamide nucleus. Based on the QSAR analyses and docking scores, some novel analogs have been synthesized. The results obtained by the in silico studies have been validated by in vivo analysis. In the first step, behavioral assessment was carried out using the scopolamine-induced amnesia methodology. The same series was then evaluated for neuroprotective potential against the oxidative stress induced by scopolamine. Biochemical estimation was performed to evaluate changes in biochemical markers of Alzheimer’s disease such as lipid peroxidation (LPO), glutathione reductase (GSH), and catalase. The scopolamine-induced amnesia model showed increased acetylcholinesterase (AChE) levels, and the inhibitory effect of the test compounds on brain AChE levels was evaluated. In all the studies, Donepezil (dose: 50 µg/kg) was used as the reference drug. Reduced AChE activity was shown by compounds 3f, 3c, and 3e. In the later stage, the most potent compounds were evaluated for their Aβ42 inhibitory profile. 
It can be hypothesized that this series of alkyl-aryl sulphonamides exhibits anti-AD activity by inhibition of the acetylcholinesterase (AChE) enzyme as well as inhibition of plaque formation on prolonged dosage, along with neuroprotection from oxidative stress.

Keywords: gamma-secretase inhibitors, Alzheimer's disease, sulphonamides, QSAR

Procedia PDF Downloads 243
635 Fatigue Analysis of Spread Mooring Line

Authors: Chanhoe Kang, Changhyun Lee, Seock-Hee Jun, Yeong-Tae Oh

Abstract:

An offshore floating structure maintains a fixed position under various environmental conditions by means of its mooring system. Environmental conditions, vessel motions, and mooring loads are applied to the mooring lines as dynamic tension. Because the global responses of a mooring system in deep water comprise wave-frequency and low-frequency responses, they should be calculated by time-domain analysis owing to their non-linear dynamic characteristics. Accounting for all mooring loads, environmental conditions, and added-mass and damping terms at each time step requires considerable computation time and capacity. Thus, provided that reliable fatigue damage can still be derived through a reasonable analysis method, it is necessary to reduce the number of analysis cases through sensitivity studies and appropriate assumptions. In this paper, fatigue effects are studied for a spread mooring system connected to an oil FPSO positioned in deep water offshore West Africa. The target FPSO, with two Mbbls of storage, has 16 spread mooring lines (4 bundles x 4 lines). Various sensitivity studies are performed for environmental loads, type of response, vessel offsets, mooring position, loading conditions, and riser behavior. Each parameter applied in the sensitivity studies is investigated for its effect on fatigue damage through fatigue analysis. Based on the sensitivity studies, the following results are presented: Wave loads are more dominant in terms of fatigue than other environmental conditions. The wave-frequency response causes higher fatigue damage than the low-frequency response. A larger vessel offset increases the mean tension and thus results in increased fatigue damage. The external line of each bundle shows the highest fatigue damage, governed by the vessel pitch motion due to swell wave conditions. Among the three loading conditions, the ballast condition has the highest fatigue damage due to higher tension. 
The riser damping arising from riser behavior tends to reduce the fatigue damage. The various results obtained from these sensitivity studies can be used as a reference for a simplified fatigue analysis of spread mooring lines.
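The fatigue damage accumulation described above can be sketched with Miner's rule, summing counted tension cycles against an S-N curve. The S-N constants and the stress-range histogram below are illustrative placeholders, not values from the paper:

```python
# Miner's-rule fatigue damage for a mooring line segment (illustrative sketch).
# S-N curve: N = a * S**(-m), with hypothetical constants a and m.

def sn_cycles_to_failure(stress_range_mpa, a=6.0e10, m=3.0):
    """Cycles to failure N for a given stress range S: N = a * S**(-m)."""
    return a * stress_range_mpa ** (-m)

def miner_damage(histogram, a=6.0e10, m=3.0):
    """Accumulated damage D = sum(n_i / N_i) over a stress-range histogram."""
    return sum(n / sn_cycles_to_failure(s, a, m) for s, n in histogram)

# Hypothetical histogram: (stress range in MPa, cycle count per year),
# as would be produced by rainflow counting of time-domain tensions.
hist = [(20.0, 2.0e6), (50.0, 3.0e5), (100.0, 1.0e4)]
damage_per_year = miner_damage(hist)
fatigue_life_years = 1.0 / damage_per_year
```

In a real workflow the histogram would come from rainflow counting of the simulated tension time series for each mooring line, and the damage would be compared against a design fatigue factor.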

Keywords: mooring system, fatigue analysis, time domain, non-linear dynamic characteristics

Procedia PDF Downloads 324
634 Assessment of Oral and Dental Health Status of Pregnant Women in Malaga, Spain

Authors: Nepton Kiani

Abstract:

Dental decay is one of the most common chronic diseases worldwide and imposes significant costs annually on individuals and healthcare systems. Addressing this issue is among the important programs of the World Health Organization in the field of oral and dental disease prevention and health promotion. In this context, oral and dental health in vulnerable groups, especially pregnant women, is of particular importance for maintaining the health of both mother and fetus. The aim of this study is to investigate the DMFT index and the various factors affecting it, in order to identify the factors influencing the process of dental decay and to take an effective step toward reducing the progression, control, and prevention of this disease. In this cross-sectional descriptive study, 120 pregnant women attending the Nepton Policlinica clinic in Malaga, Spain, were evaluated for the DMFT index and oral and dental hygiene, using interviews, precise observations, and data collection. Data analysis was then performed in SPSS using correlation tests and the Kruskal-Wallis and Mann-Whitney tests. The DMFT index for pregnant women in the three age groups 22-26, 27-31, and 32-36 years was 2.8, 4.5, and 5.6, respectively. The results of logistic regression analysis showed that demographic variables (age, education, occupation, economic status) and the frequency of brushing and flossing predicted preventive behavior up to 49.58 percent (P<0.05). Overall, the results indicated that oral and dental care during pregnancy is poor. Only a small number of pregnant women regularly used a toothbrush and dental floss or visited the dentist regularly. Moreover, poor performance in adopting oral and dental care was observed more often in pregnant women of lower economic and educational status. The present study showed that raising the level of awareness and education on oral and dental health in pregnant women is essential. 
In this field, it is necessary to focus on conducting educational care courses at healthcare centers for midwives and healthcare personnel, and at the community level for families, to prevent disease and perform dental treatments before pregnancy.
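The Kruskal-Wallis comparison of DMFT scores across the three age groups can be sketched as follows. The H statistic is computed from pooled ranks (with average ranks for ties); the DMFT values shown are hypothetical illustrations, not the study's raw data:

```python
def kruskal_h(*groups):
    """Kruskal-Wallis H statistic for k independent samples (no tie correction
    of the denominator; ties get average ranks)."""
    data = [x for g in groups for x in g]
    n = len(data)
    sorted_vals = sorted(data)
    # Average rank for each distinct value (handles ties).
    rank_of = {}
    for v in set(data):
        idxs = [i + 1 for i, s in enumerate(sorted_vals) if s == v]
        rank_of[v] = sum(idxs) / len(idxs)
    h = 0.0
    for g in groups:
        r_sum = sum(rank_of[x] for x in g)
        h += r_sum ** 2 / len(g)
    return 12.0 / (n * (n + 1)) * h - 3.0 * (n + 1)

# Hypothetical DMFT scores per age group:
h_stat = kruskal_h([1, 2, 3, 3, 4],   # 22-26 years
                   [3, 4, 5, 5, 6],   # 27-31 years
                   [4, 6, 6, 7, 8])   # 32-36 years
```

The H statistic would then be compared against a chi-square distribution with k−1 degrees of freedom, which is what a statistics package such as SPSS reports as the test's p-value.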

Keywords: Malaga, oral and dental health, pregnant women, Spain

Procedia PDF Downloads 39
633 Pediatric Hearing Aid Use: A Study Based on Data Logging Information

Authors: Mina Salamatmanesh, Elizabeth Fitzpatrick, Tim Ramsay, Josee Lagacé, Lindsey Sikora, JoAnne Whittingham

Abstract:

Introduction: Hearing loss (HL) is one of the most common disorders present at birth and in early childhood. Universal newborn hearing screening (UNHS) has been adopted on the assumption that, with early identification of HL, children will have access to optimal amplification and intervention at younger ages, thereby taking advantage of the brain’s maximal plasticity. One particular challenge for parents in the early years is achieving consistent hearing aid (HA) use, which is critical to the child’s development and constitutes the first step in the rehabilitation process. This study examined the consistency of hearing aid use in young children based on data logging information documented during audiology sessions in the first three years after hearing aid fitting. Methodology: The first 100 children who were diagnosed with bilateral HL before 72 months of age between 2003 and 2015 in a pediatric audiology clinic, and who had at least two hearing aid follow-up sessions with available data logging information, were included in the study. Data from each audiology session (age of the child at the session, average hours of use per day for each ear in the first three years after HA fitting) were collected. Clinical characteristics (degree of hearing loss, age at HA fitting) were also documented to further the understanding of factors that impact HA use. Results: Preliminary analysis of the first 20 children shows that all of them (100%) have at least one data logging session recorded in the clinical audiology system (Noah). Of the 20 children, 17 (85%) have three data logging events recorded in the first three years after HA fitting. Based on the statistical analysis of these first 20 cases, the median hours of use in the first follow-up session after the hearing aid fitting is 3.9 hours for the right ear, with an interquartile range (IQR) of 10.2 h, and 4.4 hours for the left ear, with an IQR of 9.7 h. 
In the first session, 47% of the children used their hearing aids ≤5 hours a day, 12% used them between 5 and 10 hours, and 22% used them ≥10 hours. However, these children showed increased use by the third follow-up session, with a median of 9.1 hours (IQR: 2.5 h) for the right ear and 8.2 hours (IQR: 5.6 h) for the left ear. By the third follow-up session, 14% of children used their hearing aids ≤5 hours, while 38% used them ≥10 hours. Based on these preliminary results, factors such as age and level of HL significantly impact the hours of use. Conclusion: The use of data logging information to assess actual hours of HA use provides an opportunity to examine (a) the challenges faced by families of young children with HAs and (b) the factors that impact use in very young children. Data logging, when used collaboratively with parents, can be a powerful tool to identify problems and to encourage and assist families in maximizing their child’s hearing potential.
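The median and interquartile range statistics reported for daily hours of use can be reproduced with short helpers. The quartile convention here is Tukey's median-of-halves (conventions differ between packages), and the example data-logging values are invented for illustration:

```python
def median(xs):
    """Median of a list of numbers."""
    s = sorted(xs)
    n = len(s)
    m = n // 2
    return s[m] if n % 2 else (s[m - 1] + s[m]) / 2.0

def iqr(xs):
    """Interquartile range via Tukey's hinges (median of each half)."""
    s = sorted(xs)
    n = len(s)
    lower = s[: n // 2]
    upper = s[(n + 1) // 2:]
    return median(upper) - median(lower)

# Hypothetical average daily hours of HA use logged for one follow-up session:
daily_hours = [0.5, 2.0, 3.9, 4.4, 6.1, 9.8, 12.3, 13.0]
session_median = median(daily_hours)
session_iqr = iqr(daily_hours)
```

Running the same computation per ear and per follow-up session would yield the median/IQR pairs of the kind reported in the abstract.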

Keywords: hearing loss, hearing aid, data logging, hours of use

Procedia PDF Downloads 221
632 Automatic Target Recognition in SAR Images Based on Sparse Representation Technique

Authors: Ahmet Karagoz, Irfan Karagoz

Abstract:

Synthetic Aperture Radar (SAR) is a radar modality that can be integrated into manned and unmanned aerial vehicles to create high-resolution images in all weather conditions, day and night. In this study, SAR images of military vehicles with different azimuth and depression angles are pre-processed in the first stage. The main purpose here is to reduce the strong speckle noise found in SAR images. For this, the Wiener adaptive filter, the mean filter, and the median filter are used to reduce the amount of speckle noise in the images without causing loss of data. During the image segmentation phase, pixel values are ordered so that the target vehicle region is separated from other regions containing unnecessary information. The target image is binarized by setting the brightest 20% of pixels to 255 and the remaining pixels to 0. In addition, a segmentation comparison is performed using appropriate parameters of the statistical region merging algorithm. In the feature extraction step, the feature vectors belonging to the vehicles are obtained using Gabor filters with different orientation, frequency, and angle values. A bank of Gabor filters is created by varying these parameters to extract the important features that form the distinctive parts of the images. Finally, the images are classified by the sparse representation method, using l₁-norm analysis. A joint database of the feature vectors generated from the target images of the military vehicle types is assembled column by column and transformed into matrix form. To classify the vehicles, the test image of each vehicle is converted to vector form and l₁-norm analysis of the sparse representation method is applied against the existing database matrix. 
As a result, correct recognition is achieved by matching the target images of the military vehicles with the test images by means of the sparse representation method, with a 97% classification success rate obtained on SAR images of different military vehicle types.
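The l₁-norm classification step can be sketched with a small sparse-representation classifier (SRC): an iterative soft-thresholding (ISTA) solver finds a sparse code for the test vector over the training dictionary, and the class whose atoms give the smallest reconstruction residual wins. The dictionary, labels, and λ below are toy assumptions, not the paper's Gabor-feature database:

```python
import numpy as np

def ista_l1(A, y, lam=0.01, n_iter=500):
    """ISTA solver for min_x 0.5*||y - Ax||^2 + lam*||x||_1."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2       # step size from spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x + t * A.T @ (y - A @ x)         # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)  # soft threshold
    return x

def src_classify(A, labels, y, lam=0.01):
    """Pick the class whose dictionary atoms reconstruct y with least residual."""
    x = ista_l1(A, y, lam)
    best, best_res = None, float("inf")
    for c in set(labels):
        mask = np.array([lab == c for lab in labels])
        residual = np.linalg.norm(y - A @ np.where(mask, x, 0.0))
        if residual < best_res:
            best, best_res = c, residual
    return best

# Toy dictionary: unit-norm feature columns for two hypothetical classes.
A = np.array([[1.0, 0.8, 0.0, 0.0],
              [0.0, 0.6, 1.0, 0.6],
              [0.0, 0.0, 0.0, 0.8]])
labels = ["tank", "tank", "truck", "truck"]
y = np.array([1.0, 0.0, 0.0])                 # test vector close to class "tank"
pred = src_classify(A, labels, y)
```

In the paper's setting, the columns of the dictionary would be the Gabor feature vectors of the training SAR chips, and each test chip would be vectorized the same way before classification.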

Keywords: automatic target recognition, sparse representation, image classification, SAR images

Procedia PDF Downloads 352
631 Cyclic Etching Process Using Inductively Coupled Plasma for Polycrystalline Diamond on AlGaN/GaN Heterostructure

Authors: Haolun Sun, Ping Wang, Mei Wu, Meng Zhang, Bin Hou, Ling Yang, Xiaohua Ma, Yue Hao

Abstract:

Gallium nitride (GaN) is an attractive material for next-generation power devices. However, the performance of GaN-based high electron mobility transistors (HEMTs) is often limited by the self-heating effect. To address this problem, integrating devices with polycrystalline diamond (PCD) has been demonstrated to be an efficient way to alleviate the self-heating issue of GaN-based HEMTs. Among all heat-spreading schemes, capping the epitaxial layer with PCD before the HEMT process is one of the most effective. The mainstream method of fabricating PCD-capped HEMTs is currently to deposit the diamond heat-spreading layer on the AlGaN surface, which is covered by a thin nucleation dielectric/passivation layer. To achieve patterned etching of the diamond heat spreader and device preparation, we selected SiN, deposited by plasma-enhanced chemical vapor deposition (PECVD), as the hard mask for diamond etching. The conventional diamond etching method first uses F-based etching to remove the SiN from the window region, followed by O₂/Ar plasma etching of the diamond. However, scanning electron microscopy (SEM) and focused ion beam microscopy (FIB) show that many diamond pillars remain on the etched diamond surface. Through our study, we found that this is caused by the high roughness of the diamond surface and the overlap between diamond grains, which makes the etching of the SiN hard mask insufficient and leaves micro-masks on the diamond surface. Thus, a cyclic etching method was proposed to solve the problem of the residual SiN left by the F-based etching. We used F-based etching in the first step to remove the SiN hard mask in the specific region; then, O₂/Ar plasma was introduced to etch the diamond in the corresponding region. These two etching steps were set as one cycle. 
After the first cycle, we used further cycles to clear the pillars, in which F-based etching removed the residual SiN and O₂/Ar plasma then etched the diamond. Whether another cycle is performed depends on whether SiN micro-masks remain. By this method, we eventually achieved self-terminated etching of the diamond and a smooth surface after etching. These results demonstrate that the cyclic etching method can be successfully applied to the integrated preparation of polycrystalline diamond thin films and GaN HEMTs.

Keywords: AlGaN/GaN heterojunction, O₂/Ar plasma, cyclic etching, polycrystalline diamond

Procedia PDF Downloads 112
630 Breast Cancer Sensing and Imaging Utilizing a Printed Ultra-Wide Band Spherical Sensor Array

Authors: Elyas Palantei, Dewiani, Farid Armin, Ardiansyah

Abstract:

A high-precision printed microwave sensor for sensing and monitoring potential breast cancer in women's breast tissue was numerically optimized. The single UWB printed sensor element, successfully modeled through several numerical optimizations, was fabricated in multiples and incorporated into a woman's bra to form the spherical sensor array. One sample of the UWB microwave sensor obtained through the numerical computation and optimization was chosen for fabrication. In total, the spherical sensor array consists of twelve stair-patch structures, and each element was individually measured to characterize its electrical properties, especially the return loss. The comparison of the S11 profiles of all UWB sensor elements is discussed. The constructed UWB sensor is verified using HFSS simulations, CST simulations, and experimental measurement. Numerically, both HFSS and CST confirm that the potential operating bandwidth of the UWB sensor is approximately 4.5 GHz. However, the measured bandwidth is about 1.2 GHz due to technical difficulties during the manufacturing step. The implemented UWB microwave sensing and monitoring system consists of the 12-element UWB printed sensor array, a vector network analyzer (VNA) serving as the transceiver and signal-processing part, and a desktop or laptop PC acting as the image-processing and display unit. In practice, all the reflected power collected from the whole surface of the artificial breast model is grouped into a number of pixel color classes positioned at the corresponding rows and columns (pixel numbers). The total number of power pixels applied in the 2D imaging process was set to 100 (a 10x10 power-distribution grid). This was determined by considering the total area of a breast phantom of average Asian women's breast size and the physical dimensions of the single UWB sensor. 
The microwave imaging results are plotted, and, together with some technical problems that arose in developing the breast sensing and monitoring system, are examined in the paper.
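The grouping of reflected power into pixel color classes on a 10x10 grid can be sketched as uniform binning. The number of classes and the power samples below are assumptions for illustration, not the system's actual calibration:

```python
def power_to_pixel_classes(powers, n_classes=5):
    """Map reflected-power samples onto color-class indices by uniform binning
    between the minimum and maximum observed power."""
    lo, hi = min(powers), max(powers)
    width = (hi - lo) / n_classes or 1.0      # guard against a flat signal
    return [min(int((p - lo) / width), n_classes - 1) for p in powers]

def to_grid(classes, rows=10, cols=10):
    """Arrange the per-position class indices into the rows x cols image grid."""
    assert len(classes) == rows * cols
    return [classes[r * cols:(r + 1) * cols] for r in range(rows)]

# 100 hypothetical power readings -> 10x10 class grid for 2D display.
powers = [float(p) for p in range(100)]
grid = to_grid(power_to_pixel_classes(powers))
```

Each class index would then be assigned a display color, producing the 2D power-distribution image described in the abstract.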

Keywords: UWB sensor, UWB microwave imaging, spherical array, breast cancer monitoring, 2D-medical imaging

Procedia PDF Downloads 182
629 A Geosynchronous Orbit Synthetic Aperture Radar Simulator for Moving Ship Targets

Authors: Linjie Zhang, Baifen Ren, Xi Zhang, Genwang Liu

Abstract:

Ship detection is of great significance for both military and civilian applications. Synthetic aperture radar (SAR), with its all-day, all-weather, ultra-long-range characteristics, has been widely used. In view of the low temporal resolution of low-Earth-orbit SAR and the need for high-temporal-resolution SAR data, geosynchronous orbit (GEO) SAR is receiving more and more attention. Since GEO SAR has a short revisit period and a large coverage area, it is expected to be well suited to monitoring marine ship targets. However, the height of the orbit increases the integration time by almost two orders of magnitude, so for moving marine vessels the utility and efficacy of GEO SAR remain uncertain. This paper examines the feasibility of GEO SAR by presenting a GEO SAR simulator for moving ships. The presented simulator is a geometry-based radar imaging simulator, which focuses on geometrical quality rather than radiometric accuracy. Its inputs are a 3D ship model (.obj format, produced by most 3D design software, such as 3ds Max), the ship's velocity, and the parameters of the satellite orbit and SAR platform. Its outputs are simulated GEO SAR raw signal data and a SAR image. The simulation proceeds in four steps. (1) Read the 3D model, including the ship rotation (pitch, yaw, and roll) and velocity (speed and direction) parameters, and extract the primitives (triangles) that are visible from the SAR platform. (2) Compute the radar scattering from the ship with the physical optics (PO) method. In this step, the vessel is sliced into many small rectangular primitives along the azimuth, and the radiometric calculation for each primitive is carried out separately. Since this simulator focuses only on the complex structure of ships, only single-bounce and double-bounce reflections are considered. (3) Generate the raw data with GEO SAR signal modeling. 
Since the usual ‘stop-and-go’ model is not valid for GEO SAR, the range model must be reconsidered. (4) Finally, generate the GEO SAR image with an improved range-Doppler method. Numerical simulations of a fishing boat and a cargo ship are given. GEO SAR images for different postures, velocities, satellite orbits, and SAR platforms are simulated. By analyzing these simulated results, the effectiveness of GEO SAR for the detection of moving marine vessels is evaluated.
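The breakdown of the 'stop-and-go' approximation can be illustrated by evaluating the slant range with both satellite and ship moving continuously in slow time. The straight-line motion model, the sample positions/velocities, and the L-band wavelength below are simplifying assumptions, not the simulator's actual orbit propagation:

```python
import math

def slant_range(t, sat_pos0, sat_vel, ship_pos0, ship_vel):
    """Instantaneous satellite-to-ship distance at slow time t, with both
    platforms moving (no 'stop-and-go' freeze of the geometry)."""
    sat = [p + v * t for p, v in zip(sat_pos0, sat_vel)]
    ship = [p + v * t for p, v in zip(ship_pos0, ship_vel)]
    return math.dist(sat, ship)

def echo_phase(r, wavelength=0.24):
    """Two-way echo phase for slant range r (L-band wavelength assumed)."""
    return -4.0 * math.pi * r / wavelength

# Range history of a 10 m/s ship over a GEO-scale (hundreds of seconds) aperture:
history = [slant_range(t, (0.0, 0.0, 3.6e7), (0.0, 1.5, 0.0),
                       (0.0, 0.0, 0.0), (10.0, 0.0, 0.0))
           for t in range(0, 300, 60)]
```

Accumulating `echo_phase` along such a range history is what drives the azimuth focusing, and it is the slow, non-hyperbolic drift of this history over the long GEO integration time that forces the range model to be reconsidered.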

Keywords: GEO SAR, radar, simulation, ship

Procedia PDF Downloads 166
628 Glyco-Biosensing as a Novel Tool for Prostate Cancer Early-Stage Diagnosis

Authors: Pavel Damborsky, Martina Zamorova, Jaroslav Katrlik

Abstract:

Prostate cancer is annually the most common newly diagnosed cancer among men. Extensive evidence suggests that the traditional serum prostate-specific antigen (PSA) assay still suffers from a lack of sufficient specificity and sensitivity, resulting in vast over-diagnosis and overtreatment. Early-stage detection of prostate cancer (PCa) thus undisputedly plays a critical role in successful treatment and improved quality of life. Over the last decade, particular altered glycans have been described that are associated with a range of chronic diseases, including cancer and inflammation. These glycan differences enable a distinction to be made between physiological and pathological states and suggest a valuable biosensing tool for diagnosis and follow-up purposes. Aberrant glycosylation is one of the major characteristics of disease progression. Consequently, the aim of this study was to develop a more reliable tool for early-stage PCa diagnosis employing lectins as glyco-recognition elements. Biosensor and biochip technology employing lectin-based glyco-profiling is one of the most promising strategies for fast and efficient analysis of glycoproteins. Proof-of-concept experiments based on a sandwich assay employing an anti-PSA antibody and an aptamer as capture molecules, followed by lectin glycoprofiling, were performed. We present a lectin-based biosensing assay for glycoprofiling of the serum biomarker PSA using different biosensor and biochip platforms, such as label-free surface plasmon resonance (SPR) and a microarray with a fluorescent label. The results suggest significant differences in the interaction of particular lectins with PSA. Antibody-based assays are frequently associated with sensitivity, reproducibility, and cross-reactivity issues; aptamers provide remarkable advantages over antibodies due to their nucleic acid origin, stability, and lack of glycosylation. 
All these data are a further step toward the construction of highly selective, sensitive, and reliable sensors for early-stage diagnosis. The experimental set-up also holds promise for the development of comparable assays for other glycosylated disease biomarkers.

Keywords: biomarker, glycosylation, lectin, prostate cancer

Procedia PDF Downloads 395
627 Comparison of Physical and Chemical Effects on Senescent Cells

Authors: Svetlana Guryeva, Inna Kornienko, Andrey Usanov, Dmitry Usanov, Elena Petersen

Abstract:

Every day, cells in our organism are exposed to various factors: chemical agents, reactive oxygen species, ionizing radiation, and others. These factors can damage DNA, the cell membrane, intracellular compartments, and proteins. The fate of a cell depends on the intensity and duration of exposure. Prolonged and intense exposure causes irreversible damage accumulation, which triggers permanent cell-cycle arrest (cellular senescence) or cell death programs. Low-dose exposure, by contrast, can lead to cell renovation and improvement of the cell's functional state. It is therefore a pivotal question to identify the factors and doses that produce these positive effects. To estimate the influence of different agents, the proliferation index and the levels of cell death markers (annexin V/propidium iodide), senescence-associated β-galactosidase, and lipofuscin were measured. The experiments were conducted on primary human fibroblasts at the 8th passage; according to the levels of the markers mentioned, these cells were defined as senescent. The effect of a low-frequency magnetic field was investigated, and different modes of magnetic field exposure were tested. The physical agents were compared with chemical agents: metformin (10 mM) and taurine (0.8 mM and 1.6 mM), with which the cells were incubated for 5 days. The greatest decrease in the levels of senescence-associated β-galactosidase (21%) and lipofuscin (17%) was observed in the primary senescent fibroblasts 5 days after double treatment, at a 48 h interval, with the low-frequency magnetic field. There were no significant changes in the proliferation index after magnetic field application, and no cytotoxic effect of the magnetic field was observed. The chemical agent taurine (1.6 mM) decreased the level of senescence-associated β-galactosidase by 23% and lipofuscin by 22%. 
Metformin reduced the activity of senescence-associated β-galactosidase by 15% and the level of lipofuscin by 19% in this experiment. According to these results, the effect of the double treatment at a 48 h interval with the low-frequency magnetic field and the effect of taurine (1.6 mM) were comparable to that of metformin, whose anti-aging properties are established. In conclusion, this study can become the first step towards the creation of a standardized system for investigating different effects on senescent cells.

Keywords: biomarkers, magnetic field, metformin, primary fibroblasts, senescence, taurine

Procedia PDF Downloads 268
626 Investigation of Two Polymorphisms of the hTERT Gene (rs2736098 and rs2736100) and the miR-146a rs2910164 Polymorphism in Cervical Cancer

Authors: Hossein Rassi, Alaheh Gholami Roud-Majany, Zahra Razavi, Massoud Hoshmand

Abstract:

Cervical cancer is a multistep disease thought to result from an interaction between genetic background and environmental factors. Human papillomavirus (HPV) infection is the leading risk factor for cervical intraepithelial neoplasia (CIN) and cervical cancer. In addition, certain hTERT and miRNA polymorphisms may play an important role in carcinogenesis. This study attempts to clarify the relation of hTERT genotypes and miR-146a genotypes to cervical cancer. Forty-two archival samples with cervical lesions retrieved from Khatam Hospital and 40 samples from healthy individuals, used as the control group, were studied. A simple and rapid method was used to detect the simultaneous amplification of the HPV consensus L1 region and HPV-16, -18, -11, -31, -33, and -35, along with the β-globin gene as an internal control. Multiplex PCR was used for detection of the hTERT and miR-146a rs2910164 genotypes in our laboratory. Finally, data analysis was performed using version 7 of the Epi Info(TM) 2012 software and the chi-square (χ²) test for trend. Cervical lesions were collected from 42 patients with squamous metaplasia, cervical intraepithelial neoplasia, and cervical carcinoma. Successful DNA extraction was assessed by PCR amplification of the β-actin gene (99 bp). According to the results, the hTERT (rs2736098) GG genotype and the miR-146a rs2910164 CC genotype were significantly associated with an increased risk of cervical cancer in the study population. In this study, we detected HPV 18 in 13 of the 42 cervical cancer samples. The association between such SNP polymorphisms and human papillomavirus has been examined in only a few studies, and the differences among their findings may result from differences in ethnicity, geographic situation, and lifestyle in each region. The present study provides preliminary evidence that the hTERT rs2736098 GG genotype and the miR-146a rs2910164 CC genotype may affect cervical cancer risk in the study population, interacting synergistically with the HPV 18 genotype. 
Our results demonstrate that testing hTERT rs2736098 genotypes and miR-146a rs2910164 genotypes in combination with HPV 18 can serve to identify major risk factors in the early detection of cervical cancers. Furthermore, the results indicate the possibility of primary prevention of cervical cancer by vaccination against HPV 18 in Iran.

Keywords: polymorphism of hTERT gene, miR-146a rs2910164 polymorphism, cervical cancer, virus

Procedia PDF Downloads 312
625 Statistical Correlation between Logging-While-Drilling Measurements and Wireline Caliper Logs

Authors: Rima T. Alfaraj, Murtadha J. Al Tammar, Khaqan Khan, Khalid M. Alruwaili

Abstract:

OBJECTIVE/SCOPE: Caliper logging data provide critical information about wellbore shape and deformations, such as stress-induced borehole breakouts or washouts. Multiarm mechanical caliper logs are often run on wireline, which can be time-consuming, costly, and/or challenging in certain formations. To minimize rig time and improve operational safety, it is valuable to develop analytical solutions that can estimate caliper logs using available Logging-While-Drilling (LWD) data, without the need to run wireline caliper logs. As a first step, the objective of this paper is to perform a statistical analysis using an extensive dataset to identify the important physical parameters that should be considered in developing such analytical solutions. METHODS, PROCEDURES, PROCESS: Caliper logs and LWD data from eleven wells, totaling more than 80,000 data points, were obtained and imported into data analytics software for analysis. Several parameters were selected to test their relationship with the measured maximum and minimum caliper logs. These parameters include gamma ray, porosity, shear and compressional sonic velocities, bulk density, and azimuthal density. The data from the eleven wells were first visualized and cleaned. Using the analytics software, several analyses were then performed, including the computation of Pearson's correlation coefficients to quantify the statistical relationship between the selected parameters and the caliper logs. RESULTS, OBSERVATIONS, CONCLUSIONS: The statistical analysis showed that some parameters correlate well with the caliper log data. For instance, the bulk density and azimuthal directional densities showed Pearson's correlation coefficients in the range of 0.39 to 0.57, which were relatively high compared with the correlation coefficients of the caliper data with other parameters. 
Other parameters, such as porosity, exhibited extremely low correlation coefficients with the caliper data. Various crossplots and visualizations of the data are also presented to provide further insights from the field data. NOVEL/ADDITIVE INFORMATION: This study offers a unique and novel look into the relative importance of, and correlation between, different LWD measurements and wireline caliper logs via an extensive dataset. The results pave the way for a more informed development of new analytical solutions for estimating the size and shape of the wellbore in real time while drilling, using LWD data.
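Pearson's correlation coefficient used in the analysis can be computed directly. The LWD and caliper values shown are illustrative numbers, not the field dataset:

```python
def pearson_r(x, y):
    """Pearson's correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical bulk-density log vs. measured maximum caliper (inches):
bulk_density = [2.30, 2.35, 2.40, 2.45, 2.50, 2.55]
max_caliper = [8.9, 8.7, 8.8, 8.5, 8.4, 8.3]
r = pearson_r(bulk_density, max_caliper)
```

Repeating this over each candidate LWD curve against the maximum and minimum caliper logs yields the coefficient ranking described in the results.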

Keywords: LWD measurements, caliper log, correlations, analysis

Procedia PDF Downloads 109
624 Two-Dimensional Analysis and Numerical Simulation of the Navier-Stokes Equations for Principles of Turbulence around Isothermal Bodies Immersed in Incompressible Newtonian Fluids

Authors: Romulo D. C. Santos, Silvio M. A. Gama, Ramiro G. R. Camacho

Abstract:

In this paper, the thermo-fluid dynamics of mixed convection (natural and forced convection) and the principles of turbulent flow around complex geometries have been studied. In these applications, it was necessary to analyze the interaction between the flow field and a heated immersed body held at constant surface temperature. This paper presents a study of a two-dimensional incompressible Newtonian fluid around an isothermal geometry using the immersed boundary method (IBM) with the virtual physical model (VPM). The numerical code used for all simulations computes the temperature field with Dirichlet boundary conditions. Important dimensionless quantities are calculated: the Strouhal number, obtained using the Fast Fourier Transform (FFT), the Nusselt number, and the drag and lift coefficients, along with the velocity and pressure fields. Streamlines and isothermal lines are presented for each simulation, showing the flow dynamics and patterns. The Navier-Stokes and energy equations for mixed convection were discretized using the finite difference method in space, and second-order Adams-Bashforth and fourth-order Runge-Kutta methods in time, with the fractional step method used to couple the calculation of pressure, velocity, and temperature. For the simulation of turbulence, this work used the Smagorinsky and Spalart-Allmaras models. The first model is based on the local equilibrium hypothesis for small scales and the Boussinesq hypothesis, such that the energy injected into the turbulence spectrum equals the energy dissipated by convective effects. The Spalart-Allmaras model uses a single transport equation for the turbulent viscosity. The results were compared with numerical data, validating the treatment of heat transfer together with the turbulence models. The IBM/VPM is a powerful tool to simulate flow around complex geometries, and the results showed good numerical convergence with respect to the adopted references.
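The Smagorinsky closure mentioned above computes a subgrid eddy viscosity ν_t = (C_s Δ)² |S| from the resolved strain-rate tensor. The sketch below evaluates it with finite differences on a uniform 2D grid; the constant C_s and the shear-flow test field are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, dy, cs=0.17):
    """Smagorinsky eddy viscosity nu_t = (cs*delta)^2 * |S| on a 2D grid,
    with |S| = sqrt(2 S_ij S_ij) from finite-difference velocity gradients."""
    dudx = np.gradient(u, dx, axis=1)
    dudy = np.gradient(u, dy, axis=0)
    dvdx = np.gradient(v, dx, axis=1)
    dvdy = np.gradient(v, dy, axis=0)
    s12 = 0.5 * (dudy + dvdx)                  # off-diagonal strain component
    s_mag = np.sqrt(2.0 * (dudx**2 + dvdy**2 + 2.0 * s12**2))
    delta = (dx * dy) ** 0.5                   # filter width = sqrt of cell area
    return (cs * delta) ** 2 * s_mag

# Uniform shear u = y, v = 0 gives |S| = 1 everywhere, so nu_t = (cs*delta)^2.
y = np.arange(4.0).reshape(4, 1)
u = np.tile(y, (1, 4))
v = np.zeros_like(u)
nu_t = smagorinsky_nu_t(u, v, dx=1.0, dy=1.0)
```

In a full solver, this ν_t would be added to the molecular viscosity in the discretized momentum equations at every time step, which is how the Smagorinsky model injects subgrid dissipation into the large-eddy simulation.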

Keywords: immersed boundary method, mixed convection, turbulence methods, virtual physical model

Procedia PDF Downloads 108
623 Macroscopic Support Structure Design for the Tool-Free Support Removal of Laser Powder Bed Fusion-Manufactured Parts Made of AlSi10Mg

Authors: Tobias Schmithuesen, Johannes Henrich Schleifenbaum

Abstract:

The additive manufacturing process laser powder bed fusion (LPBF) offers many advantages over conventional manufacturing processes. For example, almost any complex part can be produced, such as topologically optimized lightweight parts that would be inconceivable with conventional manufacturing. A major challenge posed by the LPBF process, however, is, in most cases, the need to use and remove support structures on critically inclined part surfaces (α < 45° relative to the substrate plate). These are mainly used for dimensionally accurate mapping of part contours and to reduce distortion by absorbing process-related internal stresses. Furthermore, they transfer the process heat to the substrate plate and are therefore indispensable for the LPBF process. A major obstacle to the economical use of the LPBF process in industrial process chains is currently still the high manual effort involved in removing support structures. According to the state of the art (SoA), the parts are usually treated with simple hand tools (e.g., pliers, chisels) or by machining (e.g., milling, turning). New automatable approaches remove support structures by means of wet chemical ablation and thermal deburring. According to the state of the art, support structures are essentially adapted to the LPBF process and not to potential post-processing steps. The aim of this study is to determine support structure designs adapted to the mentioned post-processing approaches. In the first step, the essential boundary conditions for complete removal by the respective approaches are identified. Afterward, a representative demonstrator part with various macroscopic support structure designs is LPBF-manufactured and tested with regard to complete powder and support removability. Finally, based on the results, potentially suitable support structure designs for the respective approaches are derived. The investigations are carried out on the example of the aluminum alloy AlSi10Mg.

Keywords: additive manufacturing, laser powder bed fusion, laser beam melting, selective laser melting, post processing, tool-free, wet chemical ablation, thermal deburring, aluminum alloy, AlSi10Mg

Procedia PDF Downloads 83
622 Food Foam Characterization: Rheology, Texture and Microstructure Studies

Authors: Rutuja Upadhyay, Anurag Mehra

Abstract:

Solid food foams/cellular foods are colloidal systems that impart structure, texture, and mouthfeel to many food products such as bread, cakes, ice cream, and meringues. Their heterogeneous morphology makes the quantification of structure/mechanical relationships complex. The porous structure of solid food foams is highly influenced by the processing conditions, the ingredient composition, and their interactions. Sensory perception of food foams depends on bubble size, shape, orientation, quantity, and distribution, which together determine the texture of foamed foods. The state and structure of the solid matrix control the deformation behavior of the food, such as elasticity/plasticity or fracture, which in turn affects the force-deformation curves. The obvious step in relating the mechanical properties to the porous structure is to quantify them simultaneously. Here, we study food foams such as bread dough, baked bread, and steamed rice cakes to determine the link between ingredients and the corresponding effect of each on the rheology, microstructure, bubble size, and texture of the final product. Dynamic rheometry (small-amplitude oscillatory shear, SAOS), confocal laser scanning microscopy, flatbed scanning, image analysis, and texture profile analysis (TPA) have been used to characterize the foods studied. In all the above systems, a common observation was that when the mean bubble diameter is smaller, the product is harder, as evidenced by increased storage and loss moduli (G′, G″), whereas when the mean bubble diameter is larger, the product is softer, with lower moduli values. The bubble size distribution also affects the texture of foods. Bread doughs with hydrocolloids (xanthan gum, alginate) showed a more uniform bubble size distribution. Bread baking experiments were performed to study the rheological changes and the mechanisms involved in the structural transition from dough to crumb. Steamed rice cakes with xanthan gum (XG) added at 0.1% concentration showed lower hardness, with a narrower pore size distribution and a larger mean pore diameter. Thus, control of bubble size could be an important parameter defining final food texture.
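The image-analysis step for bubble/pore size can be sketched as follows (a minimal hypothetical example, not the authors' pipeline; the 8x8 test image is synthetic):

```python
import numpy as np
from scipy import ndimage

def mean_pore_diameter(binary_image, pixel_size=1.0):
    """Label connected pore regions in a binarized crumb image and
    return the mean equivalent-circle diameter of the pores."""
    labels, n = ndimage.label(binary_image)   # connected pore regions
    if n == 0:
        return 0.0
    areas = ndimage.sum(binary_image, labels, index=range(1, n + 1))
    diameters = 2.0 * np.sqrt(areas / np.pi) * pixel_size  # equivalent circle
    return float(np.mean(diameters))

img = np.zeros((8, 8), dtype=int)
img[1:3, 1:3] = 1   # pore of area 4 pixels
img[5:7, 4:6] = 1   # second pore of area 4 pixels
print(round(mean_pore_diameter(img), 3))
```

With a calibrated `pixel_size`, the same routine yields diameters in physical units, and the per-pore `diameters` array gives the size distribution discussed above.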

Keywords: food foams, rheology, microstructure, texture

Procedia PDF Downloads 321
621 Development of Positron Emission Tomography (PET) Tracers for the in-Vivo Imaging of α-Synuclein Aggregates in α-Synucleinopathies

Authors: Bright Chukwunwike Uzuegbunam, Wojciech Paslawski, Hans Agren, Christer Halldin, Wolfgang Weber, Markus Luster, Thomas Arzberger, Behrooz Hooshyar Yousefi

Abstract:

There is a need for a PET tracer that enables the diagnosis of alpha-synucleinopathies (Parkinson’s disease [PD], dementia with Lewy bodies [DLB], multiple system atrophy [MSA]) and the tracking of their progression in living subjects over time. Alpha-synuclein (a-syn) aggregates, which are present at all stages of disease progression, for instance in PD, are a suitable target for in vivo PET imaging. For this reason, we have developed promising a-syn tracers based on a diarylbisthiazole (DABTA) scaffold. The precursors were synthesized via a modified Hantzsch thiazole synthesis and then radiolabeled via one- or two-step radiofluorination methods. The ligands were initially screened using a combination of molecular dynamics and quantum/molecular mechanics approaches to calculate their binding affinity to a-syn (in silico binding experiments). Experimental in vitro binding assays were also performed. The ligands were further screened in other experiments, including log D, in vitro plasma protein binding and plasma stability, and biodistribution and brain metabolite analyses in healthy mice. Radiochemical yields reached 30%-72% in some cases. Molecular docking revealed possible binding sites in a-syn and the free energy of binding to those sites (−28.9 to −66.9 kcal/mol), which correlated with the high binding affinity of the DABTAs to a-syn (Ki as low as 0.5 nM) and their selectivity (> 100-fold) over Aβ and tau, which usually co-exist with a-syn in some pathologies. The log D values ranged from 2.34 to 2.88, which correlated with a free protein fraction of 0.28%-0.5%. Biodistribution experiments revealed that the tracers are taken up in the brain (5.6 %ID/g - 7.3 %ID/g) at 5 min post-injection (p.i.) and cleared out (values as low as 0.39 %ID/g were obtained at 120 min p.i.). Analyses of mouse brains at 20 min p.i. revealed almost no radiometabolites in the brain in most cases. It can be concluded that the in silico study presents a new avenue for the rational development of radioligands with suitable features. The results obtained so far are promising and encourage us to further validate the DABTAs in autoradiography, immunohistochemistry, and in vivo imaging in non-human primates and humans.

Keywords: alpha-synuclein aggregates, alpha-synucleinopathies, PET imaging, tracer development

Procedia PDF Downloads 227
620 Exploration Study of Civet Coffee: Amino Acids Composition and Cup Quality

Authors: Murna Muzaifa, Dian Hasni, Febriani, Anshar Patria, Amhar Abubakar

Abstract:

Coffee flavour is influenced by many factors, such as processing technique. Civet coffee is known as a premium coffee due to its unique processing and superior cupping quality. The desirable aroma of coffee is mostly formed during the roasting step, at high temperature, from precursors present in the green bean. Sugars, proteins, acids, and trigonelline are the principal flavor-precursor compounds in green coffee beans. It is now widely accepted that amino acids act as precursors of the Maillard reaction, during which colour and aroma are formed. To investigate amino acids in civet coffee, the concentrations of 20 amino acids (L-Isoleucine, L-Valine, L-Proline, L-Phenylalanine, L-Arginine, L-Asparagine, L-Threonine, L-Tryptophan, L-Leucine, L-Serine, L-Glutamine, L-Methionine, L-Histidine, Aspartic acid, L-Tyrosine, L-Lysine, L-Glutamic acid, L-Cysteine, L-Alanine, and Glycine) were determined in green and roasted beans of civet coffee by LC-MS analysis. The cup quality of civet coffee was assessed by a professional Q-grader following the SCAA standard method. The measured parameters were fragrance/aroma, flavor, acidity, body, uniformity, clean cup, aftertaste, balance, sweetness, and overall. Samples of civet coffee were collected from six locations in the Gayo Highlands, Aceh, Indonesia. The results showed that 18 amino acids were detected in the green bean of civet coffee, while two (L-Alanine and Glycine) were not detected. In the roasted bean, L-Tyrosine and Glycine were not detected. Glutamic acid was the amino acid with the highest concentration in both green and roasted beans (21.02 mg/g and 24.60 mg/g), followed by L-Valine (19.98 mg/g and 20.22 mg/g) and Aspartic acid (14.93 mg/g and 18.58 mg/g). Civet coffee had a fairly high cupping value (cup quality), ranging from 83.75 to 84.75, categorizing it as specialty coffee. Moreover, civet coffee was noted to have nutty, chocolaty, fishy, herby, and watery notes.

Keywords: amino acids, civet coffee, cupping quality, luwak

Procedia PDF Downloads 177
619 Waste Management in a Hot Laboratory of Japan Atomic Energy Agency – 1: Overview and Activities in Chemical Processing Facility

Authors: Kazunori Nomura, Hiromichi Ogi, Masaumi Nakahara, Sou Watanabe, Atsuhiro Shibata

Abstract:

The Chemical Processing Facility of the Japan Atomic Energy Agency is a basic research field for advanced back-end technology development using actual high-level radioactive materials, such as irradiated fuel from the fast reactor and high-level liquid waste from the reprocessing plant. In the nature of a research facility, various kinds of chemical reagents have been used for fundamental tests. Most of them were treated properly and stored in the facility's liquid waste vessel, but some were not treated and remained in the experimental space as a kind of legacy waste, which must now be treated safely. Meanwhile, we formulated the Medium- and Long-Term Management Plan of Japan Atomic Energy Agency Facilities. This comprehensive plan designates the Chemical Processing Facility as one of the facilities to be decommissioned, and treatment of the legacy waste beforehand is a necessary step for the decommissioning operation. Under these circumstances, we launched a collaborative research project called the STRAD project (Systematic Treatment of Radioactive liquid waste for Decommissioning) to develop treatment processes for the wastes of nuclear research facilities. In this project, decomposition methods for chemicals causing troublesome phenomena such as corrosion and explosion have been developed, and there is a prospect of decomposing them in the facility by simple methods. Solidification of aqueous and organic liquid wastes after decomposition has been studied by adding cement or coagulants. Furthermore, experimental tools of various materials were treated, with an effort to stabilize and compact them before packing into waste containers; this is expected to reduce the number of solid-waste transports and to widen the operating space. Some achievements of these studies are presented in this paper. The project is expected to contribute beneficial waste management outcomes that can be shared worldwide.

Keywords: chemical processing facility, medium- and long-term management plan of JAEA facilities, STRAD project, treatment of radioactive waste

Procedia PDF Downloads 140
618 Magnetic Biomaterials for Removing Organic Pollutants from Wastewater

Authors: L. Obeid, A. Bee, D. Talbot, S. Abramson, M. Welschbillig

Abstract:

The adsorption process is one of the most efficient methods for removing pollutants from wastewater, provided that suitable adsorbents are used. In order to produce environmentally safe adsorbents, natural polymers have received increasing attention in recent years. Thus, alginate and chitosan are extensively used as inexpensive, non-toxic, and efficient biosorbents. Alginate is an anionic polysaccharide extracted from brown seaweeds. Chitosan is an amino-polysaccharide; this cationic polymer is obtained by deacetylation of chitin, the major constituent of crustacean shells. Furthermore, it has been shown that the encapsulation of magnetic materials in alginate and chitosan beads facilitates their recovery from wastewater after the adsorption step through the use of an external magnetic field gradient, obtained with a magnet or an electromagnet. In the present work, we studied the adsorption affinity of magnetic alginate beads and magnetic chitosan beads (called magsorbents) for methyl orange (MO, an anionic dye), methylene blue (MB, a cationic dye), and p-nitrophenol (PNP, a hydrophobic pollutant). The effect of different parameters (solution pH, contact time, initial pollutant concentration, etc.) on the adsorption of each pollutant on the magnetic beads was investigated. The adsorption of anionic and cationic pollutants is mainly due to electrostatic interactions; consequently, methyl orange is strongly adsorbed by chitosan beads in acidic medium and methylene blue by alginate beads in basic medium. In the case of a hydrophobic pollutant, which is weakly adsorbed, we showed that adsorption is enhanced by adding a surfactant. Cetylpyridinium chloride (CPC), a cationic surfactant, was used to increase the adsorption of PNP by magnetic alginate beads. Adsorption of CPC by alginate beads occurs through two mechanisms: (i) electrostatic attraction between the cationic head groups of CPC and the negative carboxylate functions of alginate; (ii) interaction between the hydrocarbon chains of CPC. The hydrophobic pollutant is adsolubilized within the surface aggregated structures of the surfactant. Figure c shows that PNP adsorption can reach up to 95% in the presence of CPC. At the highest CPC concentrations, desorption occurs due to the formation of micelles in the solution. Our magsorbents efficiently remove ionic and hydrophobic pollutants, and we hope that this fundamental research will be helpful for the future development of magnetically assisted processes in water treatment plants.

Keywords: adsorption, alginate, chitosan, magsorbent, magnetic, organic pollutant

Procedia PDF Downloads 244
617 Digital Phase Shifting Holography in a Non-Linear Interferometer using Undetected Photons

Authors: Sebastian Töpfer, Marta Gilaberte Basset, Jorge Fuenzalida, Fabian Steinlechner, Juan P. Torres, Markus Gräfe

Abstract:

This work introduces a combination of digital phase-shifting holography with a non-linear interferometer using undetected photons. Non-linear interferometers can be used with a measurement scheme called quantum imaging with undetected photons, which separates the wavelength used for sampling an object from the wavelength detected at the imaging sensor. This method has recently attracted increasing attention, as it allows the use of exotic wavelengths (e.g., mid-infrared, ultraviolet) for object interaction while keeping the detection in spectral regions with highly developed, comparably low-cost imaging sensors. The object information, including its transmission and phase influence, is recorded in the form of an interferometric pattern. To collect these patterns, this work combines quantum imaging with undetected photons with digital phase-shifting holography using a minimal sampling of the interference. This extends the measurement capabilities of the quantum imaging scheme and brings it one step closer to application. Quantum imaging with undetected photons uses correlated photons generated by spontaneous parametric down-conversion in a non-linear interferometer to create indistinguishable photon pairs, which leads to an effect called induced coherence without induced emission. Placing an object inside the interferometer changes the interferometric pattern depending on the object’s properties. Digital phase-shifting holography records multiple images of the interference with determined phase shifts to reconstruct the complete interference shape, which can afterward be used to analyze the changes introduced by the object and infer its properties. An extensive characterization of this method was performed using a proof-of-principle setup. The measured spatial resolution, phase accuracy, and transmission accuracy are compared for different combinations of camera exposure time and number of interference sampling steps. The current limits of this method are identified, pointing to further improvements. In summary, this work presents an alternative holographic measurement method using non-linear interferometers in combination with quantum imaging, enabling new ways of measuring and motivating continued research.
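The phase-retrieval step underlying digital phase-shifting holography can be illustrated with the standard four-step formula (a generic sketch assuming π/2 phase steps, not the specific implementation of this work; the data are synthetic):

```python
import numpy as np

# From four interferograms I_k = A + B*cos(phi + k*pi/2), k = 0..3,
# the object phase follows as phi = atan2(I_3 - I_1, I_0 - I_2).
def reconstruct_phase(i0, i1, i2, i3):
    return np.arctan2(i3 - i1, i0 - i2)

phi = np.linspace(-1.0, 1.0, 5)                      # synthetic "object" phase map
frames = [2.0 + 0.7 * np.cos(phi + k * np.pi / 2) for k in range(4)]
recovered = reconstruct_phase(*frames)
print(bool(np.allclose(recovered, phi, atol=1e-9)))  # True
```

Because the arctangent cancels the background term A and the modulation amplitude B, the recovered phase depends only on the object-induced phase shift.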

Keywords: digital holography, quantum imaging, quantum holography, quantum metrology

Procedia PDF Downloads 83
616 The Relationship between Body Positioning and Badminton Smash Quality

Authors: Gongbing Shan, Shiming Li, Zhao Zhang, Bingjun Wan

Abstract:

Badminton originated in ancient civilizations in Europe and Asia more than 2000 years ago. Presently, it is played almost everywhere, with an estimated 220 million people playing regularly, ranging from professionals to recreational players; it is the second most played sport in the world after soccer. In Asia, the popularity of badminton and the number of people involved surpass soccer. Unfortunately, scientific research on badminton skills is hardly proportional to the sport's popularity: a search of the literature shows that the body of biomechanical investigations is relatively small. One of the dominant skills in badminton is the forehand overhead smash, which accounts for about one-fifth of attacking strokes during games. Empirical evidence shows that one has to adjust the body position relative to the incoming shuttlecock to produce a powerful and accurate smash. Positioning is therefore a fundamental aspect influencing smash quality, yet the literature shows a dearth of studies on this fundamental aspect. The goals of this study were to determine the influence of positioning and training experience on smash quality in order to discover information that could help in learning and acquiring the skill. Using a 10-camera 3D motion capture system (VICON MX, 200 frames/s) and a 15-segment full-body biomechanical model, 14 skilled and 15 novice players were measured and analyzed. The results reveal that body positioning directly influences the quality of a smash, especially the shuttlecock release angle and the clearance height (passing over the net) of offensive players. The results also suggest that, for training proper positioning, one could adopt a self-selected comfortable position facing a statically hung shuttlecock and then step one foot back, a practical reference marker for learning. This perceptual marker could be applied in guiding the learning and training of beginners. As one gains experience through repetitive training, improved limb coordination would further increase smash quality. The researchers hope that the findings will help practitioners develop effective training programs for beginners.

Keywords: 3D motion analysis, biomechanical modeling, shuttlecock release speed, shuttlecock release angle, clearance height

Procedia PDF Downloads 485
615 Criticality Assessment Model for Water Pipelines Using Fuzzy Analytical Network Process

Authors: A. Assad, T. Zayed

Abstract:

Water networks (WNs) are responsible for providing adequate amounts of safe, high-quality water to the public. Like other critical infrastructure systems, WNs are subject to deterioration, which increases the number of breaks and leaks and lowers water quality. In Canada, 35% of water assets require critical attention, and there is a significant gap between the needed and the implemented investments. Thus, the need for efficient rehabilitation programs is becoming more urgent given the paradigm of aging infrastructure and tight budgets. The first step towards developing such programs is to formulate a performance index that reflects the current condition of water assets along with their criticality. While numerous studies in the literature have focused on various aspects of condition assessment and reliability, limited efforts have investigated the criticality of such components. Critical water mains are those whose failure causes significant economic, environmental, or social impacts on a community. Including criticality in the performance index provides a prioritizing tool for the optimal allocation of the available resources and budget. In this study, several social, economic, and environmental factors that dictate the criticality of water pipelines were elicited from the literature. Expert opinions were sought to provide pairwise comparisons of the importance of these factors. Subsequently, fuzzy logic together with the Analytic Network Process (ANP) was utilized to calculate the weights of the criteria factors. Multi-Attribute Utility Theory (MAUT) was then employed to integrate these weights with the attribute values of several pipelines in the Montreal WN. The result is a criticality index, between 0 and 1, that quantifies the severity of the consequences of failure of each pipeline. A novel contribution of this approach is that it accounts both for the interdependency between criteria factors and for the inherent uncertainties in calculating the criticality. The practical value of the study is represented by an automated Excel-MATLAB tool, which can be used by utility managers and decision makers in planning future maintenance and rehabilitation activities where highly efficient use of material and time resources is required.
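The final MAUT aggregation step can be sketched as a weighted additive utility score (all factor names, weights, and attribute values below are hypothetical placeholders standing in for the fuzzy-ANP output, not values from the study):

```python
def criticality_index(utilities, weights):
    """Weighted additive MAUT score in [0, 1]; both inputs are dicts
    keyed by criterion, with utilities normalized to [0, 1] and
    weights summing to 1 (as delivered by the ANP step)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[f] * utilities[f] for f in weights)

# Hypothetical example for one pipeline
weights = {"economic": 0.45, "social": 0.35, "environmental": 0.20}
pipe = {"economic": 0.8, "social": 0.6, "environmental": 0.4}
print(round(criticality_index(pipe, weights), 3))  # 0.65
```

Pipelines can then be ranked by this index to prioritize rehabilitation spending.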

Keywords: water networks, criticality assessment, asset management, fuzzy analytical network process

Procedia PDF Downloads 138
614 Feasibility Study of Particle Image Velocimetry in the Muzzle Flow Fields during the Intermediate Ballistic Phase

Authors: Moumen Abdelhafidh, Stribu Bogdan, Laboureur Delphine, Gallant Johan, Hendrick Patrick

Abstract:

This study is part of an ongoing effort to improve the understanding of phenomena occurring during the intermediate ballistic phase, such as muzzle flows. A thorough comprehension of muzzle flow fields is essential for optimizing muzzle device and projectile design. This flow characterization has heretofore been almost entirely limited to local and intrusive measurement techniques, such as pressure measurements using pencil probes. Consequently, the body of quantitative experimental data is limited, and so is the number of numerical codes validated in this field. The objective of the work presented here is to demonstrate the applicability of the Particle Image Velocimetry (PIV) technique in the challenging environment of the propellant flow of a .300 Blackout weapon, providing accurate velocity measurements. The key points of a successful PIV measurement are the selection of the tracer particles, their seeding technique, and their tracking characteristics. We experimentally investigated these points by evaluating the resistance, gas dispersion, and laser light reflection, as well as the response to a step change across the Mach disk, for five different solid tracers using two seeding methods. To this end, an experimental setup was assembled, consisting of a PIV system, a combustion chamber pressure measurement, classical high-speed schlieren visualization, and an aerosol spectrometer, the latter used to determine the particle size distribution in the muzzle flow. The experimental results demonstrated the ability of PIV to accurately resolve the salient features of the propellant flow, such as the underexpanded jet and vortex rings, as well as the instantaneous velocity field, with maximum centreline velocities of more than 1000 m/s. Moreover, unburned particles naturally present in the gas, as well as solid ZrO₂ particles with a nominal size of 100 nm coated on the propellant powder, are suitable as tracers. However, the TiO₂ particles intended to act as a tracer surprisingly not only melted but also acted as a combustion accelerator, decreasing the number of particles in the propellant gas.
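The core displacement-estimation step of PIV can be sketched as an FFT-based cross-correlation between two interrogation windows (a generic illustration, not the authors' PIV software; the particle images are synthetic):

```python
import numpy as np

def window_displacement(win_a, win_b):
    """Return the integer-pixel shift of win_b relative to win_a as the
    location of the cross-correlation peak, computed via FFT."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap indices above half the window size to negative shifts
    return tuple(int(p - s) if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
win_a = rng.random((32, 32))                        # stand-in particle image
win_b = np.roll(win_a, shift=(3, -2), axis=(0, 1))  # known shift between frames
print(window_displacement(win_a, win_b))  # (3, -2)
```

Dividing the shift by the inter-frame time and the image magnification converts it to a local gas velocity; production codes add sub-pixel peak fitting on top of this.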

Keywords: intermediate ballistic, muzzle flow fields, particle image velocimetry, propellant gas, particle size distribution, under expanded jet, solid particle tracers

Procedia PDF Downloads 152
613 Culturally Adapting Videos to Involve Nigerian Patients with Cancer in Clinical Trials

Authors: Abiola Falilat Ibraheem, Akinyimika Sowunmi, Valerie Otti

Abstract:

Background: The introduction of innovative cancer clinical trials to Nigeria is a critical step in addressing global inequities in cancer burden. Low health and clinical trial literacy among Nigerian patients has been cited as a significant barrier to ensuring that patients enrolled in clinical trials are truly informed. Video intervention has been shown to be an effective, proactive method for improving patients’ clinical trial knowledge. In the US, video interventions have successfully improved education about cancer clinical trials among minority patients. Thus, this study aimed to apply and adapt video interventions addressing attitudinal barriers peculiar to Nigerian patients. Methods: A hospital-based representative mixed-methods study was conducted at the Lagos State University Teaching Hospital (LASUTH) from July to December 2020, comprising cancer patients aged 18 and above. Patients were randomly selected on every clinic day, and 63 patients volunteered to participate. We first administered a cancer literacy survey to determine patients’ knowledge about clinical trials. For patients who had prior knowledge, a pre-intervention test was administered, after which a 15-minute video (Attitudes and Intention to Enroll in Therapeutic clinical trials, AIET) designed to improve patients’ knowledge, perception, and attitudes towards clinical trials was played, followed by a post-intervention test. For patients with no prior knowledge, the AIET video was played, followed by the post-intervention test. Results: Of the 63 patients sampled, 43 (68.3%) had breast cancer. On average, patients reported understanding their cancer diagnosis and treatment very well. 84.1% of patients had never heard about cancer clinical trials, and 85.7% did not know what cancer clinical trials were. There was a strong positive relationship (r = 0.916) between the pretest and posttest, indicating that the intervention improved patients’ knowledge, perception, and attitudes about cancer clinical trials. In the focus groups, patients recommended adapting the video to Nigerian settings and representing all religions in order to build trust in local clinical trialists. Conclusion: Due to the small sample size, the change in clinical trial knowledge was not statistically significant. However, there is a trend suggesting that culturally adapted video interventions can improve knowledge and perception of cancer clinical trials.

Keywords: clinical trials, culturally targeted intervention, patient education, video intervention

Procedia PDF Downloads 129
612 Repeatable Surface Enhanced Raman Spectroscopy Substrates from SERSitive for Wide Range of Chemical and Biological Substances

Authors: Monika Ksiezopolska-Gocalska, Pawel Albrycht, Robert Holyst

Abstract:

Surface Enhanced Raman Spectroscopy (SERS) is a technique used to analyze very low concentrations of substances in solutions, even aqueous solutions, which is its advantage over IR. This technique can be used in pharmacy (to check the purity of products), forensics (to determine whether illegal substances were present at a crime scene), medicine (as a medical test), and much more. Because of the high potential of this technique and its increasing popularity in analytical laboratories, and given the absence of appropriate signal-enhancing SERS platforms, which are crucial for observing the Raman effect at low analyte concentrations in solution (around 1 ppm), we decided to develop our own SERS platforms. As the enhancing layer, we chose gold and silver nanoparticles, because these two have the best SERS properties and each has an affinity for different kinds of analytes, which widens the range of research capabilities. The next step was commercialization, which resulted in the creation of the company ‘SERSitive.eu’, focused on the production of highly sensitive (EF = 10⁵ – 10⁶), homogeneous, and reproducible (70 - 80%) substrates. SERSitive SERS substrates are made by electrodeposition of silver or silver-gold nanoparticles. Through a detailed analysis of studies optimizing parameters such as deposition time, temperature of the reaction solution, applied potential, reducer, and reagent concentrations, using p-mercaptobenzoic acid (PMBA) at a concentration of 10⁻⁶ M as a standard compound, we developed a high-performance process for depositing precious-metal nanoparticles on the surface of ITO glass. To check the quality of the SERSitive platforms, we examined a wide range of chemical compounds and biological substances. Apart from analytes that have a high affinity for metal surfaces (e.g., PMBA), we obtained very good results for analytes less suited to SERS measurements. We successfully obtained intense and, more importantly, very repeatable spectra for amino acids (phenylalanine, 10⁻³ M), drugs (amphetamine, 10⁻⁴ M), designer drugs (cathinone derivatives, 10⁻³ M), and medicines, and even for bacteria (Listeria, Salmonella, Escherichia coli) and fungi.
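The quoted enhancement factor (EF = 10⁵ – 10⁶) is conventionally an analytical enhancement factor: the SERS signal per analyte concentration relative to the normal Raman signal per concentration. A minimal sketch, with hypothetical intensities and concentrations:

```python
def enhancement_factor(i_sers, c_sers, i_raman, c_raman):
    """Analytical EF = (I_SERS / c_SERS) / (I_Raman / c_Raman)."""
    return (i_sers / c_sers) / (i_raman / c_raman)

# Hypothetical case: equal band intensities measured at 1e-6 M on the
# SERS substrate versus 0.1 M in a normal Raman measurement
print(f"{enhancement_factor(1.0, 1e-6, 1.0, 0.1):.0e}")  # 1e+05
```

In practice, the intensities would be baseline-corrected peak heights of the same vibrational band under identical acquisition conditions.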

Keywords: nanoparticles, Raman spectroscopy, SERS, SERS applications, SERS substrates, SERSitive

Procedia PDF Downloads 140
611 Surface Enhanced Infrared Absorption for Detection of Ultra-Traces of 3,4-Methylenedioxymethamphetamine (MDMA)

Authors: Sultan Ben Jaber

Abstract:

Optical properties of molecules exhibit dramatic changes when the molecules are adsorbed close to nanostructured metallic surfaces such as gold and silver nanomaterials. This phenomenon has opened a wide range of research aimed at improving the efficiency of conventional spectroscopies. A well-known technique that has received intensive study is surface-enhanced Raman spectroscopy (SERS): since the first observation of the SERS effect, researchers have published a great number of articles on the potential mechanisms behind it, as well as on materials that maximize the enhancement. Infrared and Raman spectroscopy are complementary techniques; thus, surface-enhanced infrared absorption (SEIRA) likewise shows a noticeable enhancement of molecular signals under mid-IR excitation on nanostructured metallic substrates. In SEIRA, vibrational modes that produce a change in dipole moment perpendicular to the nano-metallic substrate are enhanced up to 200 times relative to those of the free molecule. SEIRA spectroscopy is therefore promising for the characterization and identification of molecules adsorbed on metallic surfaces, especially at trace levels. IR reflection-absorption spectroscopy (IRAS) is a well-established technique for measuring IR spectra of molecules adsorbed on metallic surfaces; however, the sensitivity of SEIRA spectroscopy is up to 50 times higher than that of IRAS. SEIRA enhancement has been observed for a wide range of molecules adsorbed on metallic substrates such as Au, Ag, Pd, Pt, Al, and Ni, with Au and Ag exhibiting the highest enhancement among these substrates. In this work, trace levels of 3,4-methylenedioxymethamphetamine (MDMA) have been detected on gold nanoparticle (AuNP) substrates using surface-enhanced infrared absorption (SEIRA). AuNPs were first prepared and washed, then mixed with MDMA samples at different concentrations. Substrate fabrication prior to the SEIRA measurements consisted of mixing the AuNPs with the MDMA samples, followed by vigorous stirring. The stirring step is particularly crucial, as it allows the molecules to be robustly adsorbed on the AuNPs. As a result, remarkable SEIRA enhancement was observed for MDMA samples even at trace levels, demonstrating the robustness of our approach to preparing SEIRA substrates.
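The enhancement-factor figures quoted above (200x versus the free molecule) follow the usual per-molecule convention. As a minimal sketch, assuming the standard definition EF = (I_enh / N_enh) / (I_ref / N_ref) (which the abstract does not state explicitly, and with illustrative intensity and molecule counts):

```python
def enhancement_factor(i_enh, n_enh, i_ref, n_ref):
    """Standard surface-enhancement factor: signal per molecule on the
    enhancing substrate divided by signal per molecule on the reference."""
    return (i_enh / n_enh) / (i_ref / n_ref)

# Illustrative numbers only: equal probed-molecule counts and a band that is
# 200 times stronger on the AuNP substrate give EF = 200.
ef = enhancement_factor(i_enh=200.0, n_enh=1.0, i_ref=1.0, n_ref=1.0)
```

In practice N_enh and N_ref must be estimated from surface coverage and the probed area, which is usually the dominant source of uncertainty in reported EF values.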

Keywords: surface-enhanced infrared absorption (SEIRA), gold nanoparticles (AuNPs), amphetamines, 3,4-methylenedioxymethamphetamine (MDMA), enhancement factor

Procedia PDF Downloads 59
610 Limbic Involvement in Visual Processing

Authors: Deborah Zelinsky

Abstract:

The retina filters millions of incoming signals into a smaller number of exiting optic nerve fibers that travel to different portions of the brain. Most of the signals serve eyesight (so-called "image-forming" signals). However, there are other, faster signals that travel elsewhere and are not directly involved with eyesight ("non-image-forming" signals). This article centers on the neurons of the optic nerve that connect to parts of the limbic system. Eye care providers currently look at parvocellular and magnocellular processing pathways without realizing that those are part of an enormous "galaxy" of all the body systems. Lenses modify both non-image-forming and image-forming pathways, taking A.M. Skeffington's seminal work one step further. Almost 100 years ago, he described the "Where am I" (orientation), "Where is it" (localization), and "What is it" (identification) pathways. Now, among others, there are a "How am I" (animation) and a "Who am I" (inclination, motivation, imagination) pathway. Classic eye testing considers pupils and often assesses posture and motion awareness, but classical prescriptions often overlook limbic involvement in visual processing. The limbic system is composed of the hippocampus, amygdala, hypothalamus, and anterior nuclei of the thalamus. The optic nerve's limbic connections arise from the intrinsically photosensitive retinal ganglion cells (ipRGCs) through the retinohypothalamic tract (RHT). There are two main hypothalamic nuclei with direct photic inputs: the suprachiasmatic nucleus and the paraventricular nucleus. Other hypothalamic nuclei connected with retinal function, including mood regulation, appetite, and glucose regulation, are the supraoptic nucleus and the arcuate nucleus. The retinohypothalamic tract is often overlooked when we prescribe eyeglasses. Each person is different, but the lenses we choose influence this fast processing, which affects each patient's aiming and focusing abilities. These signals arise from the ipRGCs, which were discovered only some 20 years ago, and current prescribing does not yet account for the campana retinal interneurons, discovered only two years ago. As eye care providers, we are unknowingly altering such factors as lymph flow, glucose metabolism, appetite, and sleep cycles in our patients. It is important to know what we are prescribing as visual processing evaluations expand beyond 20/20 central eyesight.

Keywords: neuromodulation, retinal processing, retinohypothalamic tract, limbic system, visual processing

Procedia PDF Downloads 74
609 Effect of Labisia pumila var. alata with a Structured Exercise Program in Women with Polycystic Ovarian Syndrome

Authors: D. Maryama AG. Daud, Zuliana Bacho, Stephanie Chok, DG. Mashitah PG. Baharuddin, Mohd Hatta Tarmizi, Nathira Abdul Majeed, Helen Lasimbang

Abstract:

Lifestyle, physical activity, food intake, genetics, and medication are contributing factors for obesity, and some obese individuals are low or non-responders to exercise. Obesity is a very common clinical feature in women affected by Polycystic Ovarian Syndrome (PCOS). Labisia pumila var. alata (LP) is a local herb that has been widely used by Malay women in treating menstrual irregularities, painful menstruation, and postpartum well-being. Therefore, this study was carried out to investigate the effect of LP combined with a structured exercise program on the anthropometric measures, body composition, and physical fitness performance of PCOS patients. Using a single-blind, parallel study design, subjects were assigned to a 16-week structured exercise program (3 times a week) under one of two interventions: LP with exercise (LPE), or exercise only (E). All subjects in the LPE group were prescribed 200 mg LP once a day for 16 weeks. The training heart rate (HR) was monitored as a percentage of the maximum HR (HRmax) achieved during a submaximal exercise test conducted at wk-0 and wk-8. Aerobic exercise intensity progressed from 25-30 min at 60-65% HRmax during the first week to 45 min at 75-80% HRmax by the end of the study. Anthropometric measures (body weight, Wt; waist circumference, WC; and hip circumference, HC), body composition (fat mass, FM; percentage body fat, %BF; fat-free mass, FFM), and physical fitness performance (push-ups to failure, PU; 1-minute sit-ups, SU; and aerobic step test, PVO2max) were measured at wk-0, wk-4, wk-8, wk-12, and wk-16. This study found that LP had no significant effect on the body composition, anthropometric measures, or physical fitness performance of PCOS patients undergoing a structured exercise program; that is, LP did not improve the exercise responses of PCOS patients on these outcomes. Overall, the data show that PCOS patients respond to exercise by increasing their aerobic endurance and muscle endurance performance, with significant reductions in FM, %BF, HC, and Wt. Therefore, exercise programs for PCOS patients should focus on aerobic fitness and muscle endurance.
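The intensity prescription above is a simple fraction of the measured HRmax. As a minimal sketch of that arithmetic (the HRmax of 180 bpm is a hypothetical value for illustration; in the study it came from each subject's submaximal test):

```python
def target_hr_range(hr_max, lo_frac, hi_frac):
    """Training heart-rate band (bpm) as a fraction of measured HRmax."""
    return (lo_frac * hr_max, hi_frac * hr_max)

# Week-1 prescription from the protocol: 60-65% of HRmax
wk1 = target_hr_range(180.0, 0.60, 0.65)
# Final-week prescription: 75-80% of HRmax
wk16 = target_hr_range(180.0, 0.75, 0.80)
```

Re-testing HRmax at wk-8, as the protocol does, keeps these bands calibrated as fitness improves.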

Keywords: polycystic ovarian syndrome, Labisia pumila var. alata, body composition, aerobic endurance, muscle endurance, anthropometric

Procedia PDF Downloads 201
608 Investigating Homicide Offender Typologies Based on Their Clinical Histories and Crime Scene Behaviour Patterns

Authors: Valeria Abreu Minero, Edward Barker, Hannah Dickson, Francois Husson, Sandra Flynn, Jennifer Shaw

Abstract:

Purpose – The purpose of this paper is to identify offender typologies based on aspects of the offenders’ psychopathology and their associations with crime scene behaviours, using data derived from the National Confidential Enquiry into Suicide and Safety in Mental Health concerning homicides in England and Wales committed by offenders in contact with mental health services in the year preceding the offence (n=759). Design/methodology/approach – The authors used multiple correspondence analysis to investigate the interrelationships between the variables and hierarchical agglomerative clustering to identify offender typologies. The variables included described the offender’s mental health history, the offender’s mental state at the time of the offence, characteristics useful for police investigations, and patterns of crime scene behaviours. Findings – Results showed differences in the offenders’ histories in relation to their crime scene behaviours. Further, the analyses revealed three homicide typologies: externalising, psychotic, and depressive. Practical implications – These typologies may assist the police during homicide investigations by furthering their understanding of the crime or likely suspect, offering insights into crime patterns, and providing advice as to what an offender’s offence behaviour might signify about his/her mental health background. The findings suggest that information concerning offender psychopathology may be useful for offender-profiling purposes in cases of homicide offenders with schizophrenia, depression, and comorbid diagnoses of personality disorder and alcohol/drug dependence. Originality/value – Empirical studies with an emphasis on offender profiling have almost exclusively focused on the inference of offender demographic characteristics. This study provides a first step in the exploration of offender psychopathology and its integration into the multivariate analysis of offence information for the purposes of investigative profiling of homicide, by identifying the dominant patterns of mental illness within homicidal behaviour.
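The two-stage design above (multiple correspondence analysis for dimension reduction, then hierarchical agglomerative clustering on the resulting coordinates) can be sketched as follows. This is a generic illustration, not the authors' pipeline: the factor coordinates here are synthetic stand-ins with three planted groups, whereas in the study they would come from an MCA of the clinical-history and crime-scene variables.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Synthetic stand-in for MCA factor coordinates: 60 "offenders" in a
# 2-D factor space, with three planted groups (hypothetical data).
rng = np.random.default_rng(0)
coords = np.vstack([
    rng.normal(loc=center, scale=0.1, size=(20, 2))
    for center in ([0.0, 0.0], [3.0, 0.0], [0.0, 3.0])
])

# Hierarchical agglomerative clustering with Ward linkage,
# cutting the dendrogram into three clusters ("typologies").
Z = linkage(coords, method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")
```

Inspecting which variable categories dominate each cluster's MCA coordinates is what allows clusters to be interpreted as, e.g., externalising, psychotic, or depressive typologies.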

Keywords: offender profiling, mental illness, psychopathology, multivariate analysis, homicide, crime scene analysis, crime scene behaviours, investigative advice

Procedia PDF Downloads 120
607 A Fourier Method for Risk Quantification and Allocation of Credit Portfolios

Authors: Xiaoyu Shen, Fang Fang, Chujun Qiu

Abstract:

Herewith we present a Fourier method for credit risk quantification and allocation in the factor-copula model framework. The key insight is that, compared to directly computing the cumulative distribution function of the portfolio loss via Monte Carlo simulation, it is in fact more efficient to calculate the transform of the distribution function in the Fourier domain; inverting back to the real domain can then be done in just one step and semi-analytically, thanks to the popular COS method (with some adjustments). We also show that the Euler risk allocation problem can be solved in the same way, since it can be transformed into the problem of evaluating a conditional cumulative distribution function. Once the conditional or unconditional cumulative distribution function is known, one can easily calculate various risk metrics. The proposed method not only fills a niche in the literature, to the best of our knowledge, of accurate numerical methods for risk allocation, but may also serve as a much faster alternative to Monte Carlo simulation for risk quantification in general. It can cope with various factor-copula model choices, which we demonstrate via examples of a two-factor Gaussian copula and a two-factor Gaussian-t hybrid copula. The fast error convergence is proved mathematically and then verified by numerical experiments, in which Value-at-Risk, Expected Shortfall, and conditional Expected Shortfall are taken as examples of commonly used risk metrics. The calculation speed and accuracy are shown to be significantly superior to Monte Carlo simulation for real-sized portfolios. The computational complexity is, by design, primarily driven by the number of factors instead of the number of obligors, as in the case of Monte Carlo simulation. The limitation of this method lies in the "curse of dimensionality" intrinsic to multi-dimensional numerical integration, which, however, can be relaxed with the help of dimension-reduction techniques and/or parallel computing, as we will demonstrate in a separate paper. The potential applications of this method cover a wide range: from credit derivatives pricing to economic capital calculation of the banking book, to default risk charge and incremental risk charge computation of the trading book, and even to risk types other than credit risk.
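The one-step semi-analytic inversion the abstract relies on is the standard COS (Fourier-cosine) expansion: given the characteristic function of a distribution, its density (and hence its CDF) is recovered from a short cosine series on a truncation range [a, b]. A minimal sketch of the textbook COS recovery follows; it uses a standard normal as the test distribution and is not the authors' credit-portfolio implementation, where the characteristic function would come from the factor-copula loss model.

```python
import numpy as np

def cos_density(x, cf, a=-10.0, b=10.0, N=128):
    """Recover a probability density from its characteristic function cf
    via the COS (Fourier-cosine) expansion on the truncation range [a, b]."""
    k = np.arange(N)
    u = k * np.pi / (b - a)
    # Cosine-series coefficients F_k, read off from the characteristic function
    F = 2.0 / (b - a) * np.real(cf(u) * np.exp(-1j * u * a))
    F[0] *= 0.5  # the k = 0 term of the series carries weight one half
    return np.sum(F * np.cos(np.outer(np.atleast_1d(x) - a, u)), axis=1)

# Test distribution: standard normal, with cf(u) = exp(-u^2 / 2);
# the recovered density at x = 0 should be 1/sqrt(2*pi) ~ 0.3989.
cf_normal = lambda u: np.exp(-0.5 * u ** 2)
density = cos_density(np.array([0.0]), cf_normal)
```

Because the characteristic function is evaluated at only N frequencies and the series converges exponentially for smooth densities, this inversion is what makes the Fourier route faster than sampling the loss distribution by Monte Carlo.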

Keywords: credit portfolio, risk allocation, factor copula model, the COS method, Fourier method

Procedia PDF Downloads 154