Search results for: failure detection and prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7721


611 Accelerator Mass Spectrometry Analysis of Isotopes of Plutonium in PM₂.₅

Authors: C. G. Mendez-Garcia, E. T. Romero-Guzman, H. Hernandez-Mendoza, C. Solis, E. Chavez-Lomeli, E. Chamizo, R. Garcia-Tenorio

Abstract:

Plutonium is present in different concentrations in the environment and in biological samples as a result of nuclear weapons testing, nuclear waste recycling and accidental discharges from nuclear plants. This radioisotope is considered one of the most radiotoxic substances, particularly when it enters the human body through inhalation of insoluble powders or aerosols. This is the main reason for determining the concentration of this radioisotope in the atmosphere. In addition, the ²⁴⁰Pu/²³⁹Pu isotopic ratio provides information about the origin of the source. PM₂.₅ sampling was carried out in the Metropolitan Zone of the Valley of Mexico (MZVM) from February 18th to March 17th, 2015, on quartz filters. There have been significant developments recently due to the establishment of new methods for sample preparation and accurate measurement of the ultra-trace levels at which plutonium is found in the environment. Accelerator mass spectrometry (AMS) is a technique with detection limits around the femtogram (10⁻¹⁵ g) level. AMS determinations include the chemical isolation of Pu. The Pu separation involved an acidic digestion and a radiochemical purification using an anion exchange resin. Finally, the source is prepared by pressing the Pu into the corresponding cathodes. To the authors' knowledge, these aerosols showed deviations of the ²³⁵U/²³⁸U ratio from its natural value, suggesting that an anthropogenic source could be altering it. Determining the concentrations of the Pu isotopes can be a useful tool to clarify this presence in the atmosphere. The first results showed a mean ²³⁹Pu activity concentration of 280 nBq m⁻³, while the ²⁴⁰Pu/²³⁹Pu ratio was 0.025, corresponding to a weapon production source; these results corroborate that there is an anthropogenic influence increasing the concentration of radioactive material in PM₂.₅.
To the authors' knowledge, activity concentrations of ²³⁹⁺²⁴⁰Pu of around a few tens of nBq m⁻³ and ²⁴⁰Pu/²³⁹Pu ratios of about 0.17 have been reported in Total Suspended Particles (TSP). The preliminary results in the MZVM show higher activity concentrations of Pu isotopes (40 to 700 nBq m⁻³) and a lower ²⁴⁰Pu/²³⁹Pu ratio than previously reported. These results are on the order of the activity concentrations of high-purity weapons-grade Pu.
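The source attribution above rests on the ²⁴⁰Pu/²³⁹Pu ratio. A minimal sketch of that logic, with commonly cited literature reference ranges (roughly 0.02-0.06 for weapons-grade material, about 0.18 for global fallout; these thresholds are an assumption of this illustration, not values from the study):

```python
# Hypothetical classifier for the likely origin of Pu in an aerosol sample,
# based on its 240Pu/239Pu ratio. Threshold values are approximate
# literature ranges, not taken from this abstract.
def classify_pu_source(ratio_240_239):
    if ratio_240_239 < 0.07:
        return "weapons-grade / weapon production"
    if 0.14 <= ratio_240_239 <= 0.24:
        return "global fallout"
    return "mixed or other source"

# The ratio reported for the MZVM samples falls in the weapons-grade range.
print(classify_pu_source(0.025))
```

With the reported ratio of 0.025 the sample is attributed to weapon production, consistent with the abstract's conclusion.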

Keywords: aerosols, fallout, mass spectrometry, radiochemistry, tracer, ²⁴⁰Pu/²³⁹Pu ratio

Procedia PDF Downloads 168
610 DeepNIC: A Method to Transform Each Tabular Variable into an Independent Image Analyzable by Basic CNNs

Authors: Nguyen J. M., Lucas G., Ruan S., Digonnet H., Antonioli D.

Abstract:

Introduction: Deep Learning (DL) is a very powerful tool for analyzing image data, but for tabular data it cannot compete with machine learning methods like XGBoost. The research question becomes: can tabular data be transformed into images that can be analyzed by simple CNNs (Convolutional Neural Networks)? Will DL become the absolute tool for data classification? All current solutions consist in repositioning the variables in a 2D matrix using their correlation proximity; doing so yields an image whose pixels are the variables. We implement a technology, DeepNIC, that offers the possibility of obtaining an image for each variable, which can be analyzed by simple CNNs. Material and method: The 'ROP' (Regression OPtimized) model is a binary, atypical decision tree whose nodes are managed by a new artificial neuron, the Neurop. By positioning an artificial neuron in each node of the decision trees, it is possible to make an adjustment on a theoretically infinite number of variables at each node. From this new decision tree whose nodes are artificial neurons, we created the concept of a 'Random Forest of Perfect Trees' (RFPT), which departs from Breiman's concepts by assembling very large numbers of small trees with no classification errors. From the results of the RFPT, we developed a family of 10 statistical information criteria, the Nguyen Information Criteria (NICs), which evaluate the predictive quality of a variable in 3 dimensions: performance, complexity and multiplicity of solutions. A NIC is a probability that can be transformed into a grey level. The value of a NIC depends essentially on 2 super-parameters used in the Neurops. By varying these 2 super-parameters, we obtain a 2D matrix of probabilities for each NIC. We can combine these 10 NICs with the functions AND, OR, and XOR. The total number of combinations is greater than 100,000. In total, we obtain for each variable an image of at least 1166x1167 pixels.
The intensity of the pixels is proportional to the probability of the associated NIC, and the color depends on the associated NIC. This image actually contains considerable information about the ability of the variable to predict Y, depending on the presence or absence of other variables. A basic CNN model was trained for supervised classification. Results: The first results are impressive. Using the public GSE22513 data (an omic data set of markers of taxane sensitivity in breast cancer), DeepNIC outperformed other statistical methods, including XGBoost. We still need to generalize the comparison across several databases. Conclusion: The ability to transform any tabular variable into an image offers the possibility of merging image and tabular information in the same format. This opens up great perspectives in the analysis of metadata.
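The core mapping, a probability becoming a grey level, can be sketched in a few lines. This is an illustration of the principle only, not the authors' implementation; the probability grid here is invented:

```python
# Illustrative sketch: map a small grid of NIC probabilities -- in DeepNIC
# these would be obtained by varying the two Neurop super-parameters -- to
# 8-bit grey levels, so that each tabular variable yields an image that a
# basic CNN can ingest.
nic_probs = [
    [0.10, 0.55, 0.90],
    [0.33, 0.75, 0.05],
]

# probability in [0, 1] -> grey level in [0, 255]
grey = [[round(p * 255) for p in row] for row in nic_probs]
print(grey[0])  # [26, 140, 230]
```

In the actual method this grid would span at least 1166x1167 pixels per variable, with one colour channel per NIC family.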

Keywords: tabular data, CNNs, NICs, DeepNICs, random forest of perfect trees, classification

Procedia PDF Downloads 128
609 Finite Element Analysis of the Anaconda Device: Efficiently Predicting the Location and Shape of a Deployed Stent

Authors: Faidon Kyriakou, William Dempster, David Nash

Abstract:

Abdominal Aortic Aneurysm (AAA) is a major life-threatening pathology for which modern approaches reduce the need for open surgery through the use of stenting. The success of stenting, though, is sometimes jeopardized by the final position of the stent graft inside the human artery, which may result in migration, endoleaks or blood flow occlusion. Herein, a finite element (FE) model of the commercial medical device Anaconda™ (Vascutek, Terumo) has been developed and validated in order to create a numerical tool able to provide useful clinical insight before the surgical procedure takes place. The Anaconda™ device consists of a series of NiTi rings sewn onto woven polyester fabric, a structure that despite its column stiffness is flexible enough to be used in very tortuous geometries. For the purposes of this study, an FE model of the device was built in Abaqus® (version 6.13-2) with a combination of beam, shell and surface elements; the choice of these building blocks was made to keep the computational cost to a minimum. The validation of the numerical model was performed by comparing the deployed position of a full stent graft device inside a constructed AAA with a duplicate set-up in Abaqus®. Specifically, an AAA geometry was built in CAD software and included regions of both high and low tortuosity. Subsequently, the CAD model was 3D printed into a transparent aneurysm, and a stent was deployed in the lab following the steps of the clinical procedure. Images on the frontal and sagittal planes of the experiment allowed the comparison with the results of the numerical model. By overlapping the experimental and computational images, the mean and maximum distances between the rings of the two models were measured in the longitudinal and transverse directions, and a 5 mm upper bound was set, a limit commonly used by clinicians when working with simulations.
The two models showed very good agreement in their spatial positioning, especially in the less tortuous regions. As a result, and despite the inherent uncertainties of a surgical procedure, the FE model allows confidence that the final position of the stent graft, when deployed in vivo, can be predicted with significant accuracy. Moreover, the numerical model ran in just a few hours, an encouraging result for applications in the clinical routine. In conclusion, the efficient modelling of a complicated structure which combines thin scaffolding and fabric has been demonstrated to be feasible. Furthermore, the capability to predict the location of each stent ring, as well as the global shape of the graft, has been shown. This can allow surgeons to better plan their procedures and medical device manufacturers to optimize their designs. The current model can further be used as a starting point for patient-specific CFD analysis.
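The validation step above reduces to comparing ring positions and checking them against the 5 mm clinical bound. A hypothetical post-processing sketch (the coordinates are invented, not the study's measurements):

```python
import math

# Compare ring centre positions (x, y, in mm) from the FE model and the
# benchtop experiment, and check the maximum discrepancy against the 5 mm
# bound mentioned in the abstract. All coordinates here are made up.
fe_rings  = [(0.0, 0.0), (10.0, 1.0), (20.0, 3.0)]
exp_rings = [(0.5, 0.2), (11.0, 1.5), (21.2, 2.1)]

dists = [math.dist(a, b) for a, b in zip(fe_rings, exp_rings)]
mean_d, max_d = sum(dists) / len(dists), max(dists)
print(f"mean={mean_d:.2f} mm, max={max_d:.2f} mm, within 5 mm: {max_d <= 5.0}")
```

In the study the same comparison was done per ring in the longitudinal and transverse directions from overlaid frontal and sagittal images.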

Keywords: AAA, efficiency, finite element analysis, stent deployment

Procedia PDF Downloads 193
608 Predicting Mortality among Acute Burn Patients Using BOBI Score vs. FLAMES Score

Authors: S. Moustafa El Shanawany, I. Labib Salem, F. Mohamed Magdy Badr El Dine, H. Tag El Deen Abd Allah

Abstract:

Thermal injuries remain a global health problem and a common issue encountered in forensic pathology. They are a devastating cause of morbidity and mortality in children and adults, especially in developing countries, causing permanent disfigurement, scarring and grievous hurt. Burns have always been a matter of legal concern in cases of suicidal burns, self-inflicted burns for false accusation, and homicidal attempts. Assessment of burn injuries, as well as rating permanent disabilities and disfigurement following thermal injuries for the benefit of compensation claims, represents a challenging problem. This necessitates the development of reliable scoring systems to yield the expected likelihood of permanent disability or a fatal outcome following burn injuries. The study was designed to identify the risk factors of mortality in acute burn patients and to evaluate the applicability of the FLAMES (Fatality by Longevity, APACHE II score, Measured Extent of burn, and Sex) and BOBI (Belgian Outcome in Burn Injury) model scores in predicting the outcome. The study was conducted on 100 adult patients with acute burn injuries admitted to the Burn Unit of Alexandria Main University Hospital, Egypt, from October 2014 to October 2015. Victims were examined after obtaining informed consent, and the data were collected in specially designed sheets including demographic data, burn details and any associated inhalation injury. Each burn patient was assessed using both the BOBI and FLAMES scoring systems. The results show that the mean age of patients was 35.54±12.32 years. Males outnumbered females (55% and 45%, respectively). Most patients were accidentally burnt (95%), whereas suicidal burns accounted for the remaining 5%. Flame burn was recorded in 82% of cases.
In addition, 8% of patients sustained burns over more than 60% of total body surface area (TBSA), 19% of patients needed mechanical ventilation, and 19% of burnt patients died, either from wound sepsis, multi-organ failure or pulmonary embolism. The mean length of hospital stay was 24.91±25.08 days. The mean BOBI score was 1.07±1.27 and that of the FLAMES score was -4.76±2.92. The FLAMES score demonstrated an area under the receiver operating characteristic (ROC) curve of 0.95, significantly higher than that of the BOBI score (0.883). A statistically significant association was revealed between both predictive models and the outcome. The study concluded that both scoring systems were beneficial in predicting mortality in acutely burnt patients; however, the FLAMES score could be applied with a higher level of accuracy.
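The AUC comparison above can be made concrete: the area under the ROC curve equals the probability that a randomly chosen non-survivor's score exceeds a randomly chosen survivor's score, which yields a direct computation. The scores below are invented, not the study's patients:

```python
# Illustrative AUC via the rank (Mann-Whitney) formulation: the fraction of
# (non-survivor, survivor) pairs in which the non-survivor has the higher
# risk score, counting ties as half.
def roc_auc(pos_scores, neg_scores):
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

died     = [4.1, 2.8, 5.0]        # hypothetical scores, fatal outcome
survived = [1.2, 0.5, 2.8, 1.9]   # hypothetical scores, survivors

print(round(roc_auc(died, survived), 3))  # 0.958
```

An AUC of 0.95 for FLAMES, as reported, means a fatally injured patient outscores a survivor in 95% of such pairs.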

Keywords: BOBI, burns, FLAMES, scoring systems, outcome

Procedia PDF Downloads 336
607 Budgetary Performance Model for Managing Pavement Maintenance

Authors: Vivek Hokam, Vishrut Landge

Abstract:

An ideal maintenance program for an industrial road network is one that would maintain all sections at a sufficiently high level of functional and structural condition. However, due to various constraints such as budget, manpower and equipment, it is not possible to carry out maintenance on all needy industrial road sections within a given planning period. A rational and systematic priority scheme needs to be employed to select and schedule industrial road sections for maintenance. Priority analysis is a multi-criteria process that determines the best ranking of sections for maintenance based on several factors. In priority setting, difficult decisions must be made: is it more important to repair a section in poor functional condition (e.g., an uncomfortable ride) or one in poor structural condition, i.e., a section in danger of becoming structurally unsound? It would seem, therefore, that any rational priority-setting approach must consider the relative importance of the functional and structural condition of the section. Maintenance priority indices and pavement performance models tend to focus mainly on pavement condition, traffic criteria, etc. There is a need to develop a model suited to the limited budget provisions for pavement maintenance. Linear programming is one of the most popular and widely used quantitative techniques. A linear programming model provides an efficient method for determining an optimal decision chosen from a large number of possible decisions. The optimum decision is one that meets a specified objective of management, subject to various constraints and restrictions. The objective here is mainly the minimization of the maintenance cost of roads in an industrial area. In order to determine the objective function for the analysis of the distress model, it is necessary to fit realistic data into the formulation.
Each type of repair is quantified over a number of stretches, considering 1000 m as one stretch. The stretch under study is 3750 m long. The quantities are put into an objective function for maximizing the number of repairs in a stretch relative to quantity. The distresses observed in this stretch are potholes, surface cracks, rutting and ravelling. The distress data are measured manually by observing each distress level on a stretch of 1000 m. The maintenance and rehabilitation measures currently followed are based on subjective judgments; hence, there is a need to adopt a scientific approach in order to use the limited resources effectively. It is also necessary to determine the pavement performance and deterioration prediction relationship more accurately, along with the economic benefits to the road network with respect to vehicle operating cost. The infrastructure of the road network should deliver the best results expected from the available funds. In this paper, the objective function for the distress model is determined by linear programming, and a deterioration model considering overloading is discussed.
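The underlying decision, choosing which repairs to fund so that repaired quantity is maximized within a budget, can be illustrated without an LP solver by brute-force enumeration over yes/no repair choices. This is a toy stand-in for the paper's linear programming formulation; all costs and quantities below are invented:

```python
from itertools import combinations

# Toy budget-constrained repair selection for the four distress types named
# in the abstract. Figures are illustrative, not the paper's data.
repairs = {            # name: (cost in currency units, repaired quantity in m^2)
    "potholes":       (40, 120),
    "surface cracks": (25,  90),
    "rutting":        (60, 150),
    "ravelling":      (30,  70),
}
budget = 100

# Enumerate every affordable subset and keep the one repairing the most area.
best = max(
    (combo for r in range(len(repairs) + 1)
     for combo in combinations(repairs, r)
     if sum(repairs[c][0] for c in combo) <= budget),
    key=lambda combo: sum(repairs[c][1] for c in combo),
)
print(sorted(best))
```

A real formulation would use continuous repair quantities and an LP solver; the enumeration above only conveys the objective-versus-budget trade-off.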

Keywords: budget, maintenance, deterioration, priority

Procedia PDF Downloads 208
606 Funding Innovative Activities in Firms: The Ownership Structure and Governance Linkage - Evidence from Mongolia

Authors: Ernest Nweke, Enkhtuya Bavuudorj

Abstract:

The harsh realities of the scandalous failures of several notable corporations over the past two decades have inextricably resulted in a surge in corporate governance studies. Nevertheless, little or no attention has been paid to corporate governance studies of Mongolian firms, and much less to the comprehension of the correlation among ownership structure, corporate governance mechanisms and trends in innovative activities. Innovation is the bedrock of enterprise success. However, the funding and support for innovative activities in many firms are to a great extent determined by the incentives provided by the firm’s internal and external governance mechanisms. Mongolia is an East Asian country currently undergoing a fast-paced transition from a socialist to a democratic system, and it is a widely held view that private ownership, as against public ownership, fosters innovation. Hence, following the privatization policy of the Mongolian Government, which has led to the transfer of ownership of hitherto state-controlled and state-directed firms to private individuals and organizations, expectations are high that sufficient motivation will be provided for firm managers to engage in innovative activities. This research focuses on the relationship between ownership structure and corporate governance on one hand and the level of innovation on the other. The paper is empirical in nature and derives data from both reliable secondary and primary sources. Secondary data for the study concerned the ownership structure of Mongolian listed firms and innovation trends in Mongolia generally; these were analyzed using tables, charts, bars and percentages. Personal interviews and surveys were held to collect primary data. Primary data concerned corporate governance practices in Mongolian firms and were collected using a structured questionnaire.
Out of a population of three hundred and twenty (320) companies listed on the Mongolian Stock Exchange (MSE), a sample of thirty (30) randomly selected companies was utilized for the study. Five (5) management-level employees were surveyed in each selected firm, giving a total of one hundred and fifty (150) respondents. Data collected were analyzed and research hypotheses tested using the Chi-Square test statistic. Research results showed that corporate governance mechanisms were better, and have significantly improved over time, in privately held as opposed to publicly owned firms. Consequently, the levels of innovation in privately held firms were considerably higher. It was concluded that a significant and positive relationship exists between private ownership and good corporate governance on one hand and the level of funding provided for innovative activities in Mongolian firms on the other hand.
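The Chi-Square test named above checks whether two categorical variables are independent. A minimal worked sketch with invented counts (not the study's survey data):

```python
# Chi-square test of independence on a hypothetical 2x2 table: does
# governance quality (good/poor) depend on ownership (private/public)?
# Expected counts come from the row/column marginals; the statistic is
# compared with the 3.841 critical value (df = 1, alpha = 0.05).
observed = [[60, 20],   # private firms: good, poor governance
            [30, 40]]   # public firms:  good, poor governance

row = [sum(r) for r in observed]
col = [sum(c) for c in zip(*observed)]
total = sum(row)

chi2 = sum((observed[i][j] - row[i] * col[j] / total) ** 2
           / (row[i] * col[j] / total)
           for i in range(2) for j in range(2))
print(f"chi2 = {chi2:.2f}, significant: {chi2 > 3.841}")
```

A statistic above the critical value, as in this invented example, would support rejecting independence, the kind of result the study reports.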

Keywords: corporate governance, innovation, ownership structure, stock exchange

Procedia PDF Downloads 197
605 A Simulated Evaluation of Model Predictive Control

Authors: Ahmed AlNouss, Salim Ahmed

Abstract:

Process control refers to the techniques used to control the variables in a process in order to maintain them at their desired values. Advanced process control (APC) is a broad term within the domain of control referring to different kinds of process control and control-related tools, for example, model predictive control (MPC), statistical process control (SPC), fault detection and classification (FDC) and performance assessment. APC is often used for solving multivariable control problems, and model predictive control (MPC) is one of only a few advanced control methods used successfully in industrial control applications. Advanced control is expected to bring many benefits to plant operation; however, the extent of the benefits is plant-specific, and the application needs a large investment. This requires an analysis of the expected benefits before the implementation of the control. In a real plant, simulation studies are carried out along with some experimentation to determine the improvement in the performance of the plant due to advanced control. In this research, such an exercise is undertaken to assess the need for APC application. The main objectives of the paper are as follows: (1) to apply MPC to a number of simulation set-ups to establish the need for MPC by comparing its performance with that of proportional-integral-derivative (PID) controllers; (2) to study the effect of controller parameters on control performance; (3) to develop appropriate performance indices (PI) to compare the performance of different controllers and to develop a novel way of presenting the tuning map of a controller. These objectives were achieved by applying a PID controller and a special type of MPC, namely dynamic matrix control (DMC), to the multi-tank process simulated in Loop-Pro. The controller performance was then evaluated by changing the controller parameters.
This performance evaluation was based on special indices related to the difference between the set point and the process variable, in order to compare the two controllers. The same principle was applied to the continuous stirred tank heater (CSTH) and continuous stirred tank reactor (CSTR) processes simulated in MATLAB; for these processes, programs were written to evaluate the performance of the PID and MPC controllers. Finally, these performance indices, along with their controller parameters, were plotted using SigmaPlot. As a result, the improvement in the performance of the control loops was quantified using relevant indices to justify the need for and importance of advanced process control. It has also been shown that, by using appropriate indices, a predictive controller can improve the performance of the control loop significantly.
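Indices "related to the difference between set point and process variable" are classically the integral error criteria. A minimal sketch of two of them, with invented step responses (the abstract does not specify which indices were used, so IAE/ISE here are an assumption):

```python
# Integral absolute error (IAE) and integral squared error (ISE) computed
# from sampled set-point and process-variable trajectories, approximating
# the integrals by a rectangle rule with step dt. Responses are invented.
def iae(sp, pv, dt):
    return sum(abs(s - p) for s, p in zip(sp, pv)) * dt

def ise(sp, pv, dt):
    return sum((s - p) ** 2 for s, p in zip(sp, pv)) * dt

setpoint = [1.0] * 5                    # unit step set point
pv_pid   = [0.0, 0.6, 0.9, 1.1, 1.0]    # hypothetical PID response
pv_dmc   = [0.0, 0.8, 1.0, 1.0, 1.0]    # hypothetical DMC response
dt = 1.0

print(iae(setpoint, pv_pid, dt), iae(setpoint, pv_dmc, dt))
```

A lower index means tighter tracking; sweeping the controller parameters and plotting the index over them gives exactly the kind of tuning map the abstract describes.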

Keywords: advanced process control (APC), control loop, model predictive control (MPC), proportional integral derivatives (PID), performance indices (PI)

Procedia PDF Downloads 407
604 Clinical Value of 18F-FDG-PET Compared with CT Scan in the Detection of Nodal and Distant Metastasis in Urothelial Carcinoma or Bladder Cancer

Authors: Mohammed Al-Zubaidi, Katherine Ong, Pravin Viswambaram, Steve McCombie, Oliver Oey, Jeremy Ong, Richard Gauci, Ronny Low, Dickon Hayne

Abstract:

Objective: Lymph node involvement, along with distant metastasis, in a patient with invasive bladder cancer determines disease survival; therefore, it is an essential determinant of therapeutic management and outcome. This retrospective study aims to determine the accuracy of FDG PET in detecting lymphatic involvement and distant metastatic urothelial cancer compared to conventional CT staging. Method: A retrospective review was conducted of 76 patients with UC or BC who underwent surgery or confirmatory biopsy and were staged with both CT and 18F-FDG-PET (up to 8 weeks apart) between 2015 and 2020. Fifty-seven patients (75%) had formal pelvic LN dissection or biopsy of a suspicious metastasis. 18F-FDG-PET reports of positive sites were qualitative, depending on SUVmax. In the CT scans, LNs enlarged by RECIST 1.1 criteria (>10 mm) and other qualitative findings suggesting metastasis were considered positive. Histopathological findings from surgical specimens or image-guided biopsies were considered the gold standard against the imaging reports. 18F-FDG-avid or enlarged pelvic LNs with surgically proven nodal metastasis were considered true positives. Performance characteristics of 18F-FDG-PET and CT, including sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV), were calculated. Results: Pelvic LN involvement was confirmed histologically in 10/57 (17.5%) patients. Sensitivity, specificity, PPV and NPV of CT for detecting pelvic LN metastases were 41.17% (95% CI: 18-67%), 100% (95% CI: 90-100%), 100% (95% CI: 59-100%) and 78.26% (95% CI: 64-89%), respectively. Sensitivity, specificity, PPV and NPV of 18F-FDG-PET for detecting pelvic LN metastases were 62.5% (95% CI: 35-85%), 83.78% (95% CI: 68-94%), 62.5% (95% CI: 35-85%), and 83.78% (95% CI: 68-94%), respectively. Pre-operative staging with 18F-FDG-PET identified distant metastatic disease in 9/76 (11.8%) patients that was occult on CT.
This retrospective study suggests that 18F-FDG-PET may be more sensitive than CT for detecting pelvic LN metastases. 7/76 (9.2%) patients avoided cystectomy owing to 18F-FDG-PET-diagnosed metastases that were not reported on CT. Conclusion: 18F-FDG-PET is more sensitive than CT for pelvic LN metastases and could serve as a standard modality for bladder cancer staging, as it may change treatment by detecting lymph node metastases that are occult on CT. Further research involving randomised controlled trials comparing the diagnostic yield of 18F-FDG-PET and CT in detecting nodal and distant metastasis in UC or BC is warranted to confirm our findings.
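The four performance characteristics quoted above all derive from one confusion matrix. As a worked sketch, the counts below (tp=7, fp=0, fn=10, tn=36) are back-calculated to reproduce the CT figures quoted in the abstract; they are illustrative, not counts reported by the study:

```python
# Sensitivity, specificity, PPV and NPV from a binary confusion matrix.
def diagnostics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical counts consistent with the CT results quoted above
# (sensitivity ~41.2%, specificity 100%, PPV 100%, NPV ~78.3%).
d = diagnostics(tp=7, fp=0, fn=10, tn=36)
print({k: round(v, 4) for k, v in d.items()})
```

Note that sensitivity depends only on the diseased column and specificity only on the disease-free column, which is why CT can have perfect specificity yet miss over half of the true nodal metastases.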

Keywords: FDG PET, CT scan, urothelial cancer, bladder cancer

Procedia PDF Downloads 122
603 Determination of Circulating Tumor Cells in Breast Cancer Patients by Electrochemical Biosensor

Authors: Gökçe Erdemir, İlhan Yaylım, Serap Erdem-Kuruca, Musa Mutlu Can

Abstract:

It has been determined that the main cause of death in cancer is metastasis rather than the primary tumor. The cells that leave the primary tumor, enter the circulation and cause metastasis in secondary organs are called "circulating tumor cells" (CTCs). The presence and number of circulating tumor cells have been associated with poor prognosis in many major types of cancer, including breast, prostate, and colorectal cancer. Knowledge of circulating tumor cells, which are seen as the main cause of cancer-related deaths due to metastasis, is thought to play a key role in the diagnosis and treatment of cancer. The fact that tissue biopsies used in cancer diagnosis and follow-up are invasive and insufficient for understanding the risk of metastasis and the progression of the disease has led to new searches. Liquid biopsy tests, performed on a small blood sample taken from the patient for the detection of CTCs, are easy and reliable, and also allow more than one sample to be taken over time to follow the prognosis. However, since these cells are found in very small numbers in the blood, capturing them is very difficult, and specially designed analytical techniques and devices are required. Methods based on the biological and physical properties of the cells are used to capture these cells in the blood. Early diagnosis is very important in following the prognosis of tumors of epithelial origin such as breast, lung, colon and prostate. Molecules such as EpCAM, vimentin, and cytokeratins are expressed on the surface of the few cells that pass from the primary tumor into the circulation and reach secondary organs, and they are used in the diagnosis of cancer at an early stage. For example, increased EpCAM expression in breast and prostate cancer has been associated with prognosis. These molecules can be determined in blood or other body fluids taken from patients.
However, more sensitive methods are required to determine them when they are at low levels over the course of the disease. The aim is to detect these molecules, found in very few cancer cells, with the help of sensitive, fast-sensing biosensors, first in breast cancer cells cultured in vitro and then in blood samples taken from breast cancer patients. In this way, cancer cells can be diagnosed early and easily, and treated effectively.

Keywords: electrochemical biosensors, breast cancer, circulating tumor cells, EpCAM, Vimentin, Cytokeratins

Procedia PDF Downloads 261
602 A POX Controller Module to Collect Web Traffic Statistics in SDN Environment

Authors: Wisam H. Muragaa, Kamaruzzaman Seman, Mohd Fadzli Marhusin

Abstract:

Software Defined Networking (SDN) is a new networking paradigm. It is designed to facilitate the way of managing, measuring, debugging and controlling the network dynamically, and to make it suitable for modern applications. Generally, measurement methods can be divided into two categories: active and passive. Active measurement injects test packets into the network in order to monitor their behaviour (the ping tool, for example), while passive measurement monitors existing traffic for the purpose of deriving measurement values. Both active and passive measurement methods are useful for the collection of traffic statistics and the monitoring of network traffic. Although there has been work focusing on measuring traffic statistics in SDN environments, it was only meant for measuring packet and byte rates for non-web traffic. In this study, a feasible method is designed to measure the number of packets and bytes in a given time, and to facilitate obtaining statistics for both web traffic and non-web traffic. Web traffic refers to HTTP requests at the application layer, while non-web traffic refers to ICMP and TCP requests. Thus, this work is more comprehensive than previous works. With a module developed on the POX OpenFlow controller, information is collected from each active flow in the OpenFlow switch and presented on a Command Line Interface (CLI) and in the Wireshark interface. The statistics displayed on the CLI and in Wireshark include, among others, the type of protocol, the number of bytes and the number of packets. Besides, this module shows, in the same statistics list, the number of flows added to the switch whenever traffic is generated from and to hosts.
In order to carry out this work effectively, our Python module sends a statistics request message to the switch, requesting its current port and flow statistics every five seconds, and the switch replies with the required information in a statistics reply message. Thus, the POX controller is notified of and updated with any changes that may occur in the entire network within a very short time. Therefore, our aim in this study is to prepare a list of the important statistics elements collected from the whole network, to be used in further research, particularly research dealing with the detection of network attacks that cause a sudden rise in the number of packets and bytes, such as Distributed Denial of Service (DDoS).
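The aggregation the module performs on each flow-stats reply can be sketched independently of the POX APIs. The flow records below are invented, and the web/non-web split by TCP port 80 is an assumption of this illustration (the abstract defines web traffic as HTTP):

```python
# Standalone sketch of the statistics derived from a flow-stats reply:
# per-category packet and byte totals, splitting flows into web traffic
# (HTTP, i.e. TCP port 80) and non-web traffic (other TCP, ICMP).
flows = [  # (protocol, dst_port, packets, bytes) -- invented sample reply
    ("tcp", 80,   120, 150_000),
    ("tcp", 22,    30,   4_000),
    ("icmp", None, 10,     980),
]

stats = {"web": [0, 0], "non-web": [0, 0]}   # [packets, bytes]
for proto, port, pkts, nbytes in flows:
    kind = "web" if proto == "tcp" and port == 80 else "non-web"
    stats[kind][0] += pkts
    stats[kind][1] += nbytes

print(stats)
```

In the real module the same totals would be refreshed from each `ofp_stats_request`/reply cycle every five seconds and printed on the CLI; a sudden jump in either counter is the DDoS signal the abstract mentions.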

Keywords: mininet, OpenFlow, POX controller, SDN

Procedia PDF Downloads 237
601 Demographic Assessment and Evaluation of Degree of Lipid Control in High Risk Indian Dyslipidemia Patients

Authors: Abhijit Trailokya

Abstract:

Background: Cardiovascular diseases (CVDs) are the major cause of morbidity and mortality in both developed and developing countries. Many clinical trials have demonstrated that lowering low-density lipoprotein cholesterol (LDL-C) reduces the incidence of coronary and cerebrovascular events across a broad spectrum of patients at risk. Guidelines for the management of patients at risk have been established in Europe and North America. The guidelines have advocated progressively lower LDL-C targets and more aggressive use of statin therapy. For Indian patients, comprehensive data on dyslipidemia management and its treatment outcomes are inadequate. There is a lack of information on existing treatment patterns, the profile of patients being treated, and the factors that determine treatment success or failure in achieving desired goals. Purpose: The present study was planned to determine the lipid control status of high-risk dyslipidemic patients treated with lipid-lowering therapy in India. Methods: This cross-sectional, non-interventional, single-visit program was conducted across 483 sites in India and enrolled male and female patients with high-risk dyslipidemia, aged 18 to 65 years, who had visited their physician at a hospital or healthcare center for a routine health check-up. The percentage of high-risk dyslipidemic patients achieving an adequate LDL-C level (< 70 mg/dL) on lipid-lowering therapy, and the association of lipid parameters with patient characteristics, comorbid conditions, and lipid-lowering drugs, were analysed. Results: 3089 patients were enrolled in the study, of which 64% were males. LDL-C data were available for 95.2% of the patients; only 7.7% of these patients achieved LDL-C levels < 70 mg/dL on lipid-lowering therapy, which may be due to the inability to follow therapeutic plans, poor compliance, or inadequate counselling by the physician.
A physician's lack of awareness of recent treatment guidelines might also contribute to patients' poor adherence: not explaining adequately the benefits and risks of a medication, and not giving consideration to the patient's lifestyle and the cost of medication. Statins were the most commonly used anti-dyslipidemic drugs across the population. A higher proportion of patients had the comorbid conditions of CVD and diabetes mellitus across all dyslipidemic patients. Conclusion: As per the European Society of Cardiology guidelines, the ideal LDL-C level in high-risk dyslipidemic patients should be less than 70 mg/dL. In the present study, only 7.7% of the patients achieved LDL-C levels < 70 mg/dL on lipid-lowering therapy, which is very low. Most high-risk dyslipidemic patients in India are on a suboptimal dosage of statins, so more aggressive, higher-dosage statin therapy may be required to achieve target LDL-C levels in high-risk Indian dyslipidemic patients.

Keywords: cardiovascular disease, diabetes mellitus, dyslipidemia, LDL-C, lipid lowering drug, statins

Procedia PDF Downloads 201
600 Digital Phase Shifting Holography in a Non-Linear Interferometer using Undetected Photons

Authors: Sebastian Töpfer, Marta Gilaberte Basset, Jorge Fuenzalida, Fabian Steinlechner, Juan P. Torres, Markus Gräfe

Abstract:

This work introduces a combination of digital phase-shifting holography with a non-linear interferometer using undetected photons. Non-linear interferometers can be used with a measurement scheme called quantum imaging with undetected photons, which allows the wavelength used for sampling an object to be separated from the wavelength detected at the imaging sensor. This method has recently attracted increasing attention, as it allows the use of exotic wavelengths (e.g., mid-infrared, ultraviolet) for object interaction while keeping the detection in spectral regions with highly developed, comparably low-cost imaging sensors. The object information, including its transmission and phase influence, is recorded in the form of an interferometric pattern. To record this, the present work combines quantum imaging with undetected photons with digital phase-shifting holography using a minimal sampling of the interference. This extends the measurement capabilities of the quantum imaging scheme and brings it one step closer to application. Quantum imaging with undetected photons uses correlated photons generated by spontaneous parametric down-conversion in a non-linear interferometer to create indistinguishable photon pairs, which leads to an effect called induced coherence without induced emission. Placing an object inside the interferometer changes the interferometric pattern depending on the object’s properties. Digital phase-shifting holography records multiple images of the interference with determined phase shifts to reconstruct the complete interference shape, which can afterward be used to analyze the changes introduced by the object and to infer its properties. An extensive characterization of this method was carried out using a proof-of-principle setup. The measured spatial resolution, phase accuracy, and transmission accuracy are compared for different combinations of camera exposure times and numbers of interference sampling steps.
The current limits of this method are shown, along with possible further improvements. To summarize, this work presents an alternative holographic measurement method using non-linear interferometers in combination with quantum imaging, enabling new ways of measuring and motivating continued research.
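For readers unfamiliar with the reconstruction step, the following is a minimal sketch of the textbook four-step phase-shifting algorithm (phase shifts of 0, π/2, π, 3π/2). It illustrates the general principle only; the abstract does not specify the authors' minimal-sampling scheme, so the step count and formulas here are the standard textbook choice, not necessarily theirs.

```python
import numpy as np

def reconstruct_four_step(I0, I1, I2, I3):
    """Recover object phase and fringe modulation from four interferograms
    I_k = A + B*cos(phi + k*pi/2), k = 0..3 (scalars or arrays)."""
    phase = np.arctan2(I3 - I1, I0 - I2)                      # object phase phi
    modulation = 0.5 * np.sqrt((I3 - I1)**2 + (I0 - I2)**2)   # fringe amplitude B
    return phase, modulation
```

Because arctan2 is quadrant-aware, the recovered phase is unambiguous over (−π, π], which is why at least three (here four) shifted frames are needed rather than a single interferogram.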

Keywords: digital holography, quantum imaging, quantum holography, quantum metrology

Procedia PDF Downloads 93
599 Application Research of Stilbene Crystal for the Measurement of Accelerator Neutron Sources

Authors: Zhao Kuo, Chen Liang, Zhang Zhongbing, Ruan Jinlu, He Shiyi, Xu Mengxuan

Abstract:

Stilbene (C₁₄H₁₂) is well known as one of the most useful organic scintillators for the pulse shape discrimination (PSD) technique because of its good scintillation properties. An on-line acquisition system was developed from several CAMAC standard plug-ins, NIM plug-ins, and a neutron/γ discriminating plug-in named 2160A, and an off-line acquisition system was built around a digital oscilloscope with a high sampling rate; both used stilbene crystals coupled to photomultiplier tube (PMT) detectors for accelerator neutron source measurements carried out at the China Institute of Atomic Energy. Pulse amplitude spectra and charge amplitude spectra were recorded in real time after good neutron/γ discrimination, whose best PSD figures of merit (FoMs) were 1.756 for the D-D accelerator neutron source and 1.393 for the D-T accelerator neutron source. The probability of neutron events among total events was 80%, and the neutron detection efficiency was 5.21% for the D-D accelerator neutron source; the corresponding values were 50% and 1.44% for the D-T accelerator neutron source after subtracting the scattering background observed by the on-line acquisition system. Pulse waveform signals were acquired randomly by the off-line acquisition system while the on-line acquisition system was working. The PSD FoMs obtained by the off-line acquisition system were 2.158 for the D-D source and 1.802 for the D-T source after off-line waveform digitization processing by the charge integration method, using just 1000 pulses. In addition, the probabilities of neutron events among total events obtained by the off-line system matched very well with those of the on-line system. The pulse information recorded by the off-line system could be reused repeatedly to adjust the parameters or methods of PSD research and to obtain neutron charge amplitude spectra or pulse amplitude spectra after digital analysis with a limited number of pulses.
The off-line acquisition system showed equivalent or better measurement performance than the on-line system with a limited number of pulses, which indicates a feasible method, based on stilbene crystal detectors, for the measurement of pulsed neutron sources such as accelerator neutron sources that emit a large number of neutrons in a short time.
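As an illustration of the off-line analysis described above, the sketch below shows the charge integration PSD parameter (tail-to-total charge ratio) and the standard FoM definition (peak separation over the sum of the two FWHMs). The integration window indices are hypothetical placeholders; the paper's actual windows are not given in the abstract.

```python
import numpy as np

def tail_to_total(pulse, i_start, i_tail, i_end):
    """Charge-integration PSD parameter for one baseline-subtracted digitized
    pulse: charge in the tail window divided by the total charge.
    Window indices are illustrative; they must be tuned per detector."""
    total = pulse[i_start:i_end].sum()
    tail = pulse[i_tail:i_end].sum()
    return tail / total

def figure_of_merit(mu_n, fwhm_n, mu_g, fwhm_g):
    """Standard PSD FoM: separation of the neutron and gamma PSD-parameter
    peaks divided by the sum of their FWHMs."""
    return abs(mu_n - mu_g) / (fwhm_n + fwhm_g)
```

With this definition, a larger FoM means better-separated neutron and gamma distributions, which is why the off-line values (2.158 and 1.802) indicate better discrimination than the on-line ones.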

Keywords: stilbene crystal, accelerator neutron source, neutron/γ discrimination, figure-of-merit, CAMAC, waveform digitization

Procedia PDF Downloads 187
598 Update on Epithelial Ovarian Cancer (EOC), Types, Origin, Molecular Pathogenesis, and Biomarkers

Authors: Salina Yahya Saddick

Abstract:

Ovarian cancer remains the most lethal gynecological malignancy due to the lack of highly sensitive and specific screening tools for detection of early-stage disease. The ovarian surface epithelium (OSE) provides the progenitor cells for 90% of human ovarian cancers. Recent morphologic, immunohistochemical, and molecular genetic studies have led to a new paradigm for the pathogenesis and origin of epithelial ovarian cancer (EOC) based on a dualistic model of carcinogenesis that divides EOC into two broad categories, designated Types I and II, characterized by specific mutations that target specific cell signaling pathways. Type I tumors are relatively genetically stable and typically display a variety of somatic sequence mutations that include KRAS, BRAF, ERBB2, PTEN, PIK3CA, CTNNB1 (the gene encoding beta-catenin), ARID1A, and PPP2R1A, but very rarely harbor TP53 mutations. The cancer stem cell (CSC) hypothesis postulates that the tumorigenic potential of CSCs is confined to a very small subset of tumor cells and is defined by their ability to self-renew and differentiate, leading to the formation of a tumor mass. Among potential protein and nucleic acid markers, miRNAs are promising biomarkers, as they are remarkably stable, allowing isolation and analysis from tissues and from blood, in which they can be found as free circulating nucleic acids and in mononuclear cells. Recently, genomic analyses have identified biomarkers and potential therapeutic targets for ovarian cancer, notably FGF18, which plays an active role in controlling migration, invasion, and tumorigenicity of ovarian cancer cells through NF-κB activation, which increases the production of oncogenic cytokines and chemokines. This review summarizes updated information on epithelial ovarian cancers and points to the most recent ongoing research.

Keywords: epithelial ovarian cancers, somatic sequence mutations, cancer stem cell (CSC), potential protein biomarker, genomic analysis, FGF18 biomarker

Procedia PDF Downloads 380
597 Frequency of Tube Feeding in Aboriginal and Non-aboriginal Head and Neck Cancer Patients and the Impact on Relapse and Survival Outcomes

Authors: Kim Kennedy, Daren Gibson, Stephanie Flukes, Chandra Diwakarla, Lisa Spalding, Leanne Pilkington, Andrew Redfern

Abstract:

Introduction: Head and neck cancer and its treatments are known for their profound effect on nutrition, and tube feeding is a common requirement to maintain nutrition. Aim: We aimed to evaluate the frequency of tube feeding in Aboriginal and non-Aboriginal patients and to examine the relapse and survival outcomes in patients who required enteral tube feeding. Methods: We performed a retrospective cohort analysis of 320 head and neck cancer patients from a single centre in Western Australia, identifying 80 Aboriginal patients and 240 non-Aboriginal patients matched on a 1:3 ratio by site, histology, rurality, and age. Data collected included patient demographics, tumour features, treatment details, and cancer and survival outcomes. Results: Aboriginal and non-Aboriginal patients required feeding tubes at similar rates (42.5% vs 46.2%, respectively); however, Aboriginal patients were far more likely to fail to return to oral nutrition, with 26.3% requiring long-term tube feeding versus only 15% of non-Aboriginal patients. In the overall study population, 27.5% required short-term tube feeding, 17.8% required long-term enteral tube nutrition, and 45.3% of patients did not have a feeding tube at any point. Relapse was more common in patients who required tube feeding, occurring in 42.1% of patients requiring long-term tube feeding and 31.8% of those requiring a short-term tube, versus 18.9% in the no-tube group. Survival outcomes for patients who required a long-term tube were also significantly poorer than for patients who required only a short-term tube, or none at all. Long-term tube-requiring patients were half as likely to survive (29.8%) as patients requiring a short-term tube (62.5%) or no tube at all (63.5%). Patients requiring a long-term tube were twice as likely to die with active disease (59.6%) as patients with no tube (28%) or a short-term tube (33%).
This may suggest an increased relapse risk in patients who require long-term feeding, due to the consequences of malnutrition on cancer and treatment outcomes, although it may simply reflect that patients with recurrent disease were more likely to have longer-term swallowing dysfunction due to recurrent disease and salvage treatments. Interestingly, long-term tube patients were also more likely to die with no active disease (10.5%, compared with 4.6% of short-term tube patients and 8% of patients with no tube), which likely reflects the increased mortality associated with long-term aspiration and malnutrition issues. Conclusions: The requirement for tube feeding was associated with a higher rate of cancer relapse; in particular, long-term tube feeding was associated with a higher likelihood of dying from head and neck cancer, but also a higher risk of dying from other causes without cancer relapse. These data reflect the complex effect of head and neck cancer and its treatments on swallowing and nutrition, and ultimately the effects of malnutrition, swallowing dysfunction, and aspiration on overall cancer and survival outcomes. Tube feeding was seen at similar rates in Aboriginal and non-Aboriginal patients; however, failure to return to oral intake, with a requirement for a long-term feeding tube, was seen far more commonly in the Aboriginal population.

Keywords: head and neck cancer, enteral tube feeding, malnutrition, survival, relapse, aboriginal patients

Procedia PDF Downloads 103
596 A Study on Unplanned Settlement in Kabul City

Authors: Samir Ranjbar, Nasrullah Istanekzai

Abstract:

According to a report published in The Guardian, Kabul, the capital city of Afghanistan, is the fifth fastest growing city in the world, whose population has increased fourfold since 2001, from 1.2 million to 4.8 million people. The main reason for this increase is the return of Afghans who migrated during the civil war. In addition to returning immigrants, steep economic growth driven by foreign assistance in the last decade created many job opportunities in Kabul and attracted individuals from neighboring provinces as well. However, the development of urban facilities such as water supply, housing, transportation, and waste management systems has yet to catch up with this rapid population increase, since Kabul city has developed traditionally and municipal governance has had very limited capacity to implement municipal bylaws. As an unwanted consequence of this growth, 70% of Kabul citizens have contributed to informal settlement, meaning around three million people live in informally settled areas lacking the vital social and physical infrastructure of livelihood. This research focuses on a 30 ha region with 2,100 residents in the center of Kabul city. A comprehensive land readjustment concept plan has been formulated for this area, through which physical and social infrastructure has been demonstrated and analyzed. The findings of this paper propose a solution for the problems of this unplanned area in Kabul: readjusting the unplanned area through a self-supporting process. This process does not need a governmental budget and can be applied by government, the private sector, and landowner associations.
Furthermore, by implementing the land readjustment process, conceptual plans can be built for unplanned areas; maximum facilities can be brought to residents’ urban life; the environment can be improved for users’ benefit; the culture and sense of cooperation, participation, and coexistence can be promoted in people’s minds; the transport system can be improved; and economic status improves (the value of land increases due to infrastructure availability and land legalization). In addition to all these public benefits, government revenue can be raised by collecting taxes from landowners. This process has been implemented in many countries of the world: it was implemented for the first time in Germany and later in most cities of Japan, and it is known as one of the most effective processes for infrastructural development. To sum up, the notable characteristic of the land readjustment process is that it works on the concept of mutual interest, in which both landowners and the government benefit. However, community engagement is very important in this process; without public cooperation, it can fail.

Keywords: land readjustment, informal settlement, Kabul, Afghanistan

Procedia PDF Downloads 254
595 Serological Evidence of Brucella spp, Coxiella burnetti, Chlamydophila abortus, and Toxoplasma gondii Infections in Sheep and Goat Herds in the United Arab Emirates

Authors: Nabeeha Hassan Abdel Jalil, Robert Barigye, Hamda Al Alawi, Afra Al Dhaheri, Fatma Graiban Al Muhairi, Maryam Al Khateri, Nouf Al Alalawi, Susan Olet, Khaja Mohteshamuddin, Ahmad Al Aiyan, Mohamed Elfatih Hamad

Abstract:

A serological survey was carried out to determine the seroprevalence of Brucella spp, Coxiella burnetii, Chlamydophila abortus, and Toxoplasma gondii in sheep and goat herds in the UAE. A total of 915 blood samples were tested: 437 (n = 222 sheep; n = 215 goats) were collected from livestock farms in the Emirates of Abu Dhabi, Dubai, Sharjah, and Ras Al-Khaimah (RAK), and an additional 478 (n = 244 sheep; n = 234 goats) were collected from the Al Ain livestock central market. Samples were tested by indirect ELISA for pathogen-specific antibodies, with Brucella antibodies further corroborated by the Rose Bengal agglutination test. Seropositivity for the four pathogens was variably documented in sheep and goats from the study area. The overall livestock farm seroprevalence rates for Brucella spp, C. burnetii, C. abortus, and T. gondii were, respectively, 2.7%, 27.9%, 8.1%, and 16.7% for sheep, and 0.0%, 31.6%, 9.3%, and 5.1% for goats. Additionally, the seroprevalence rates of Brucella spp, C. burnetii, C. abortus, and T. gondii in samples from the livestock market were 7.4%, 21.7%, 16.4%, and 7.0% for sheep, and 0.9%, 32.5%, 19.2%, and 11.1% for goats, respectively. Overall, sheep had 12.59 times higher odds than goats of testing seropositive for Brucella spp (OR, 12.59 [95% CI 2.96-53.6]) but were less likely to be positive for C. burnetii antibodies (OR, 0.73 [95% CI 0.54-0.97]). Notably, the differences in the seroprevalence rates of C. abortus and T. gondii between sheep and goats were not statistically significant (p > 0.05). The present data indicate that all four study pathogens are present in sheep and goat populations in the UAE, where coxiellosis is apparently the most seroprevalent, followed by chlamydophilosis, toxoplasmosis, and brucellosis. While sheep from the livestock market were more likely than those from farms to be Brucella-seropositive, the overall exposure risk of C. burnetii appears to be greater for goats than for sheep.
As more animals from the livestock market were likely to be seropositive to Chlamydophila spp, it is possible that, under UAE animal production conditions, at least coxiellosis and chlamydophilosis are more likely to increase the culling rate of domesticated small ruminants than toxoplasmosis and brucellosis. While anecdotal reports have previously insinuated that brucellosis may be a significant animal health risk in the UAE, the present data suggest that C. burnetii, C. abortus, and T. gondii are more significant pathogens of sheep and goats in the country. Despite this possibility, the extent to which these pathogens may be contributing nationally to reproductive failure in sheep and goat herds is not known and needs to be investigated. These agents may also carry a potential zoonotic risk that needs to be investigated in risk groups such as farm workers and slaughterhouse personnel. An ongoing study is evaluating the seroprevalence of bovine coxiellosis in the Emirate of Abu Dhabi, and its data will further elucidate the broader epidemiological dynamics of the disease in the national herd.
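The odds ratios quoted above (e.g., OR 12.59 [95% CI 2.96-53.6]) are the standard 2x2-table statistic. The sketch below shows a generic computation with a Woolf (log-normal) confidence interval; the counts in the usage example are illustrative only, since the abstract does not report the underlying contingency table.

```python
import math

def odds_ratio(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table with a Woolf 95% CI.
    a/b = outcome-positive/negative in group 1 (e.g., sheep);
    c/d = the same in group 2 (e.g., goats). All counts must be > 0."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)       # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi
```

A CI that excludes 1 (as both intervals above do) corresponds to a statistically significant difference in odds between the two species at the 5% level.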

Keywords: Brucella spp, Chlamydophila abortus, goat, sheep, Toxoplasma gondii, UAE

Procedia PDF Downloads 206
594 Evaluation of Rheological Properties, Anisotropic Shrinkage, and Heterogeneous Densification of Ceramic Materials during Liquid Phase Sintering by Numerical-Experimental Procedure

Authors: Hamed Yaghoubi, Esmaeil Salahi, Fateme Taati

Abstract:

The effective shear and bulk viscosities, as well as the dynamic viscosity, describe the rheological properties of the ceramic body during the liquid phase sintering process. The rheological parameters depend on the physical and thermomechanical characteristics of the material, such as relative density, temperature, grain size, diffusion coefficient, and activation energy. The main goal of this research is to acquire a comprehensive understanding of the response of an incompressible viscous ceramic material during the liquid phase sintering process, including stress-strain relations, sintering and hydrostatic stresses, and the prediction of anisotropic shrinkage and heterogeneous densification as functions of sintering time, while accounting for the simultaneous influence of the gravity field and frictional forces. After raw material analysis, a standard hard porcelain mixture was designed and prepared as the ceramic body. Three different experimental configurations were designed: midpoint deflection, sinter bending, and free sintering samples. The numerical method for the ceramic specimens during the liquid phase sintering process is implemented in a CREEP user subroutine in ABAQUS. The numerical-experimental procedure shows the anisotropic behavior, the completely different spatial displacements along the three directions, and the incompressibility of the ceramic samples during the sintering process. An anisotropic shrinkage factor is proposed to investigate the shrinkage anisotropy. It is shown that the shrinkage along the normal axis of the cast sample is about 1.5 times larger than that along the casting direction, and that the gravitational force in pyroplastic deformation intensifies the shrinkage anisotropy more than in the free sintering sample. The lowest and greatest equivalent creep strains occur at the intermediate zone and around the central line of the midpoint-distorted sample, respectively.
In the sinter bending test sample, the equivalent creep strain approaches its maximum near the contact area with the refractory support. The inhomogeneity in von Mises, pressure, and principal stresses intensifies the relative density non-uniformity in all samples except the free sintering one. The symmetrical stress distribution around the center of the free sintering sample hinders pyroplastic deformations. Densification results confirmed that the effective bulk viscosity was well defined by the relative density values. The stress analysis confirmed that the sintering stress exceeds the hydrostatic stress from the start to the end of sintering, so, from both theoretical and experimental points of view, the sintering process proceeds to completion.
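As a simple numerical illustration, an anisotropy factor of the kind proposed above can be expressed as the ratio of linear shrinkage strains in the two directions. The dimensions in the usage example are hypothetical, chosen only to reproduce the reported factor of about 1.5; the paper's exact definition of its factor is not given in the abstract.

```python
def shrinkage_anisotropy(L0_normal, Lf_normal, L0_casting, Lf_casting):
    """Ratio of linear shrinkage strain normal to the casting plane to that
    along the casting direction, from initial (L0) and final (Lf) lengths."""
    eps_normal = (L0_normal - Lf_normal) / L0_normal
    eps_casting = (L0_casting - Lf_casting) / L0_casting
    return eps_normal / eps_casting
```

A factor of 1 would mean isotropic shrinkage; the reported value of about 1.5 quantifies how strongly the casting process biases densification toward the normal direction.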

Keywords: anisotropic shrinkage, ceramic material, liquid phase sintering process, rheological properties, numerical-experimental procedure

Procedia PDF Downloads 343
593 Quantum Coherence Sets the Quantum Speed Limit for Mixed States

Authors: Debasis Mondal, Chandan Datta, S. K. Sazim

Abstract:

Quantum coherence is a key resource, like entanglement and discord, in quantum information theory. Wigner-Yanase skew information, which was shown to be the quantum part of the uncertainty, has recently been proposed as an observable measure of quantum coherence. On the other hand, the quantum speed limit (QSL) has been established as an important notion for developing ultra-fast quantum computers and communication channels. Here, we show that these two quantities are related, and we thus cast coherence as a resource to control the speed of quantum communication. In this work, we address three basic and fundamental questions. There have been rigorous attempts to achieve tighter evolution time bounds and to generalize them to mixed states. However, we are yet to know: (i) What is the ultimate limit of quantum speed? (ii) Can we measure this speed of quantum evolution in interferometry by measuring a physically realizable quantity? Most of the bounds in the literature are either not measurable in interference experiments or not tight enough. As a result, they cannot be used effectively in experiments on quantum metrology, quantum thermodynamics, and quantum communication, and especially in Unruh effect detection, where a small fluctuation in a parameter needs to be detected. Therefore, a search for the tightest yet experimentally realisable bound is the need of the hour. It would be much more interesting if one could relate various properties of states or operations, such as coherence, asymmetry, dimension, and quantum correlations, to the QSL. Although such relations may help us to control and manipulate the speed of communication, apart from particular cases such as the Josephson junction and the multipartite scenario, there has been little advancement in this direction. Therefore, the third question we ask is: (iii) Can we relate such quantities to the QSL?
In this paper, we address these fundamental questions and show that quantum coherence, or asymmetry, plays an important role in setting the QSL. An important question in the study of the quantum speed limit is how it behaves under classical mixing and partial elimination of states, because this may help us choose a state or evolution operator to control the speed limit properly. In this paper, we address this question and show that the product of the time bound of the evolution and the quantum part of the uncertainty in energy, i.e., the quantum coherence or asymmetry of the state with respect to the evolution operator, decreases under classical mixing and partial elimination of states.
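For context, the textbook Mandelstam-Tamm bound for pure states, which work of this kind generalizes to mixed states, reads

```latex
\tau \;\ge\; \frac{\hbar\,\arccos\!\left(|\langle \psi_0 | \psi_\tau \rangle|\right)}{\Delta H},
\qquad
\Delta H = \sqrt{\langle H^2 \rangle - \langle H \rangle^2},
```

where τ is the evolution time under Hamiltonian H. Per the abstract, the paper's mixed-state bound replaces the role of the full energy variance ΔH² with a coherence quantity based on the Wigner-Yanase skew information, i.e., only the quantum part of the uncertainty.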

Keywords: completely positive trace preserving maps, quantum coherence, quantum speed limit, Wigner-Yanase Skew information

Procedia PDF Downloads 356
592 Armed Forces Special Powers Act and Human Rights in Nagaland

Authors: Khrukulu Khusoh

Abstract:

The strategies and tactics used by governments throughout the world to counter terrorism and insurgency over the past few decades include the declaration of states of siege or martial law, the enactment of anti-terrorist legislation, and the strengthening of judicial powers. Some of these measures have been more successful than others, but some have proved counterproductive, alienating the public from the authorities and further polarizing an already fractured political environment. Such cases of alienation and polarization can be seen in the northeastern states of India. The Armed Forces (Special Powers) Act, which was introduced to curb insurgency in the remote jungles of far-flung areas, has remained a telling tale of agony in northeast India. Grievous trauma to humans through encounter killings, custodial deaths, unwarranted torture, and exploitation of women and children in several ways has been reported in Nagaland, Manipur, and other northeastern states, where the Indian army has been exercising powers under the Armed Forces (Special Powers) Act. While terrorism and insurgency are destructive of human rights, counter-terrorism does not necessarily restore and safeguard them. This special law has not proven effective, particularly in dealing with terrorism and insurgency. The insurgency has persisted in the state of Nagaland even after sixty years, notwithstanding the presence of a good number of special laws. There is a need to fight elements that threaten the security of a nation, but the methods chosen should be measured; otherwise, the fight is lost. There has been no review of the effectiveness or failure of the act in realizing its intended purpose, nor any attempt on the part of the state to look critically at the violation of the rights of innocent citizens by state agencies. The Indian state keeps enacting laws, but none of these could be effectively applied in the absence of clarity of purpose.
Therefore, every new law enacted time and again to deal with security threats has failed to bring any solution over the last six decades. The Indian state resorts to measures which actually give nothing in terms of strategic benefits but are short-term victories that might result in long-term tragedies. Right-thinking citizens and human rights activists across the country therefore feel that the introduction of the Armed Forces (Special Powers) Act was itself a violation of human rights and that its continuation is undesirable. What worries everyone is the arbitrary use, or rather misuse, of power by the Indian armed forces, particularly against the weaker sections of society, including women. After having been subjected to indiscriminate abuse of that law, the people of northeast India have been demanding its revocation for a long time. The present paper attempts to examine critically the violation of human rights under the Armed Forces (Special Powers) Act and to bring out its impact on the Naga people.

Keywords: armed forces, insurgency, special laws, violence

Procedia PDF Downloads 497
591 Levels of Heavy Metals and Arsenic in Sediment and in Clarias Gariepinus, of Lake Ngami

Authors: Nashaat Mazrui, Oarabile Mogobe, Barbara Ngwenya, Ketlhatlogile Mosepele, Mangaliso Gondwe

Abstract:

Over the last several decades, the world has seen a rapid increase in activities such as deforestation, agriculture, and energy use. Subsequently, trace elements are being deposited into our water bodies, where they can accumulate to toxic levels in aquatic organisms and can be transferred to humans through fish consumption. Thus, though fish is a good source of essential minerals and omega-3 fatty acids, it can also be a source of toxic elements. Monitoring trace elements in fish is important for the proper management of aquatic systems and the protection of human health. The aim of this study was to determine concentrations of trace elements in sediment and muscle tissues of Clarias gariepinus at Lake Ngami, in the Okavango Delta in northern Botswana, during low floods. The fish were bought from local fishermen, and samples of muscle tissue were acid-digested and analyzed for iron, zinc, copper, manganese, molybdenum, nickel, chromium, cadmium, lead, and arsenic using inductively coupled plasma optical emission spectroscopy (ICP-OES). Sediment samples were also collected and analyzed for the same elements and for organic matter content. Results show that in all samples, iron was found in the greatest amount, while cadmium was below the detection limit. Generally, the concentrations of elements in sediment were higher than in fish, except for zinc and arsenic. While the concentration of zinc was similar in the two media, arsenic was almost 3 times higher in fish than in sediment. To evaluate the risk to human health from fish consumption, the target hazard quotient (THQ) and cancer risk for an average adult in Botswana, sub-Saharan Africa, and riparian communities in the Okavango Delta were calculated for each element. All elements were found to be well below regulatory limits and do not pose a threat to human health, except arsenic. The results suggest that other benthic-feeding fish species could potentially have high arsenic levels too.
This has serious implications for human health, especially for riparian households, for whom fish is a key component of food and nutrition security.
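As an illustration of the screening calculation behind the THQ values, the sketch below follows the widely used USEPA-style formula. The fish intake rate, body weight, and reference dose in the usage example are hypothetical placeholders, not the exposure parameters used in the study.

```python
def target_hazard_quotient(conc_mg_per_kg, intake_g_per_day, rfd_mg_per_kg_day,
                           body_weight_kg=60.0, ef_days_per_year=365, ed_years=70):
    """USEPA-style THQ = (EF * ED * FIR * C) / (RfD * BW * AT), AT = ED * 365 days.
    The intake rate (FIR) is converted from g/day to kg/day.
    THQ > 1 flags a potential non-carcinogenic health risk."""
    at_days = ed_years * 365
    return (ef_days_per_year * ed_years * intake_g_per_day * 1e-3 * conc_mg_per_kg) \
           / (rfd_mg_per_kg_day * body_weight_kg * at_days)
```

With daily exposure (EF = 365) the EF*ED/AT factor cancels to 1, so the quotient reduces to daily intake of the element divided by the tolerable daily dose for the given body weight.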

Keywords: arsenic, African sharptooth catfish, Okavango Delta, trace elements

Procedia PDF Downloads 193
590 Digital Immunity System for Healthcare Data Security

Authors: Nihar Bheda

Abstract:

Protecting digital assets such as networks, systems, and data from advanced cyber threats is the aim of Digital Immunity Systems (DIS), which are a subset of cybersecurity. With features like continuous monitoring, coordinated reactions, and long-term adaptation, DIS seeks to mimic biological immunity. This minimizes downtime by automatically identifying and eliminating threats. Traditional security measures, such as firewalls and antivirus software, are insufficient for enterprises, such as healthcare providers, given the rapid evolution of cyber threats. The number of medical record breaches that have occurred in recent years is proof that attackers are finding healthcare data to be an increasingly valuable target. However, obstacles to enhancing security include outdated systems, financial limitations, and a lack of knowledge. DIS is an advancement in cyber defenses designed specifically for healthcare settings. Protection akin to an "immune system" is produced by core capabilities such as anomaly detection, access controls, and policy enforcement. Coordination of responses across IT infrastructure to contain attacks is made possible by automation and orchestration. Massive amounts of data are analyzed by AI and machine learning to find new threats. After an incident, self-healing enables services to resume quickly. The implementation of DIS is consistent with the healthcare industry's urgent requirement for resilient data security in light of evolving risks and strict guidelines. With resilient systems, it can help organizations lower business risk, minimize the effects of breaches, and preserve patient care continuity. DIS will be essential for protecting a variety of environments, including cloud computing and the Internet of medical devices, as healthcare providers quickly adopt new technologies. DIS lowers traditional security overhead for IT departments and offers automated protection, even though it requires an initial investment. 
In the near future, DIS may prove to be essential for small clinics, blood banks, imaging centers, large hospitals, and other healthcare organizations. Cyber resilience can become attainable for the whole healthcare ecosystem with customized DIS implementations.

Keywords: digital immunity system, cybersecurity, healthcare data, emerging technology

Procedia PDF Downloads 69
589 The Analysis of Gizmos Online Program as Mathematics Diagnostic Program: A Story from an Indonesian Private School

Authors: Shofiayuningtyas Luftiani

Abstract:

Some private schools in Indonesia have started integrating the online program Gizmos into the teaching-learning process. Gizmos was developed to supplement the existing curriculum by being integrated into instructional programs. The program offers features built on inquiry-based simulations, in which students conduct explorations using a worksheet while teachers use the teacher guidelines to direct and assess students’ performance. In this study, the discussion of Gizmos highlights its features as an assessment medium for mathematics learning for secondary school students. The discussion is based on a case study and a literature review from the Indonesian context. Applying Gizmos as an assessment medium serves the purpose of diagnostic assessment. As part of the diagnostic assessment, teachers review the student exploration sheets, analyze students’ difficulties in particular, and consider the findings when planning the future learning process. This assessment is important because the teacher needs data about students’ persistent weaknesses. Additionally, the program helps build students’ understanding through its interactive simulations. Currently, the assessment over-emphasizes students’ answers in the worksheet against the provided answer keys, even though students also demonstrate their skills in translating the question, running the simulation, and answering the question. Instead, the assessment should involve multiple perspectives and sources of students’ performance, since teachers should adjust instructional programs to the complexity of students’ learning needs and styles. Consequently, an approach to improving the assessment components is selected to challenge the current assessment. The purpose of this challenge is to involve not only cognitive diagnosis but also the analysis of skills and errors.
Concerning the selected setting for this diagnostic assessment, which combines cognitive diagnosis, skills analysis, and error analysis, teachers should create an assessment rubric. The rubric plays an important role as a guide providing a set of criteria for the assessment. Without a precise rubric, the teacher risks ineffectively documenting and following up the data about students at risk of failure. Furthermore, teachers who employ Gizmos for diagnostic assessment may encounter obstacles. Based on the conditions of assessment in the selected setting, these obstacles involve time constraints, reluctance toward a higher teaching burden, and students’ behavior. Consequently, a teacher who adopts Gizmos with these approaches has to plan, implement, and evaluate the assessment. The main point of this assessment is not the result in the students’ worksheet; rather, the diagnostic assessment is a two-stage process: prompting and then effectively following up both individual weaknesses and those of the learning process. Ultimately, the discussion of Gizmos as a medium of diagnostic assessment reflects an effort to improve the mathematical learning process.

Keywords: diagnostic assessment, error analysis, Gizmos online program, skills analysis

Procedia PDF Downloads 183
588 Surface Enhanced Infrared Absorption for Detection of Ultra-Trace Levels of 3,4-Methylenedioxymethamphetamine (MDMA)

Authors: Sultan Ben Jaber

Abstract:

The optical properties of molecules exhibit dramatic changes when the molecules are adsorbed close to nanostructured metallic surfaces such as gold and silver nanomaterials. This phenomenon has opened a wide range of research aimed at improving the efficiency of conventional spectroscopies. A well-known technique that has been an intensive focus of study is surface-enhanced Raman spectroscopy (SERS); since the first observation of the SERS phenomenon, researchers have published a great number of articles on the potential mechanisms behind this effect as well as on developing materials to maximize the enhancement. Infrared and Raman spectroscopy are complementary techniques; thus, surface-enhanced infrared absorption (SEIRA) also shows a noticeable enhancement of molecules under mid-IR excitation on nano-metallic substrates. In SEIRA, vibrational modes that give rise to changes in dipole moment perpendicular to the nano-metallic substrate are enhanced about 200 times relative to the free molecule’s modes. SEIRA spectroscopy is promising for the characterization and identification of molecules adsorbed on metallic surfaces, especially at trace levels. IR reflection-absorption spectroscopy (IRAS) is a well-known technique for measuring IR spectra of molecules adsorbed on metallic surfaces; however, SEIRA spectroscopy can be up to 50 times more sensitive than IRAS. SEIRA enhancement has been observed for a wide range of molecules adsorbed on metallic substrates such as Au, Ag, Pd, Pt, Al, and Ni, with Au and Ag substrates exhibiting the highest enhancement. In this work, trace levels of 3,4-methylenedioxymethamphetamine (MDMA) were detected on gold nanoparticle (AuNP) substrates using surface-enhanced infrared absorption (SEIRA). AuNPs were first prepared and washed, then mixed with MDMA samples at different concentrations. Fabricating the substrate prior to SEIRA measurements involved mixing the AuNPs and MDMA samples followed by vigorous stirring.
The stirring step is particularly crucial, as it allows molecules to adsorb robustly onto the AuNPs. As a result, remarkable SEIRA enhancement was observed for MDMA samples even at trace levels, demonstrating the robustness of our approach to preparing SEIRA substrates.

Keywords: surface-enhanced infrared absorption (SEIRA), gold nanoparticles (AuNPs), amphetamines, methylenedioxymethamphetamine (MDMA), enhancement factor

Procedia PDF Downloads 70
587 Navigating through Organizational Change: TAM-Based Manual for Digital Skills and Safety Transitions

Authors: Margarida Porfírio Tomás, Paula Pereira, José Palma Oliveira

Abstract:

Robotic grasping is advancing rapidly, but transferring techniques from rigid to deformable objects remains a challenge. Deformable and flexible items, such as food containers, demand nuanced handling due to their changing shapes. Bridging this gap is crucial for applications in food processing, surgical robotics, and household assistance. AGILEHAND, a Horizon project, focuses on developing advanced technologies for sorting, handling, and packaging soft and deformable products autonomously. These technologies serve as strategic tools to enhance flexibility, agility, and reconfigurability within the production and logistics systems of European manufacturing companies. Key components include intelligent detection, self-adaptive handling, efficient sorting, and agile, rapid reconfiguration. The overarching goal is to optimize work environments and equipment, ensuring both efficiency and safety. As new technologies emerge in the food industry, there will be implications for the labour force, for safety, and for the acceptance of the new technologies. To address these implications, AGILEHAND emphasizes the integration of the social sciences and humanities, for example through the application of the Technology Acceptance Model (TAM). The project aims to create a change management manual that will outline strategies for developing digital skills and managing health and safety transitions. It will also provide best practices and models for organizational change. Additionally, AGILEHAND will design effective training programs to enhance employee skills and knowledge. This information will be obtained through a combination of case studies, structured interviews, questionnaires, and a comprehensive literature review. The project will explore how organizations adapt during periods of change and identify factors influencing employee motivation and job satisfaction.
This project received funding from the European Union’s Horizon 2020/Horizon Europe research and innovation programme under grant agreement No 101092043 (AGILEHAND).

Keywords: change management, technology acceptance model, organizational change, health and safety

Procedia PDF Downloads 46
586 Rational Approach to Analysis and Construction of Curved Composite Box Girders in Bridges

Authors: Dongming Feng, Fangyin Zhang, Liling Cao

Abstract:

Horizontally curved steel-concrete composite box girders are extensively used in highway bridges. They consist of a reinforced concrete deck on top of a prefabricated steel box-section beam, which exhibits a high torsional rigidity to resist the torsional effects induced by the curved structural geometry. This type of structural system is often constructed in two stages. In the composite section, tension is carried mainly by the steel box and compression by the concrete deck. The steel girders are delivered as large prefabricated U-shaped sections designed for ease of construction. They are then erected on site and overlaid with a cast-in-place reinforced concrete deck. The functionality of the composite section is not achieved until the closed section is formed by fully cured concrete. Because this kind of composite section is built in two stages, the erection of the open steel box presents some challenges to contractors. When the reinforced concrete slab is cast in place, special care should be taken with the bracings that prevent the open U-shaped steel box from global and local buckling. In the case of multiple steel boxes, the design detailing should pay sufficient attention to the installation requirements of the bracings connecting adjacent steel boxes to prevent global buckling. The slope in the transverse direction and the grade in the longitudinal direction will cause some local deformation of the steel boxes that affects the connection of the bracings. During the design phase, it is common for engineers to model the curved composite box girder using one-dimensional beam elements. This is adequate for analyzing the global behavior; however, it is unable to capture the local deformation that affects the installation of the field bracing connections.
The local deformation may become a critical factor controlling construction tolerances, and overlooking it will produce inadequate structural details that eventually cause misalignment in the field and erection failure. This paper briefly describes the construction issues we encountered in real structures, investigates the differences between beam-element modeling and shell/solid-element modeling, and examines their impact on the different construction stages. The P-delta effect due to the slope and curvature of the composite box girder is analyzed, and the secondary deformation is compared to the first-order response and evaluated for its impact on the installation of lateral bracings. The paper discusses a rational approach to preparing construction documents, and recommendations are made on communication among engineers, erectors, and fabricators to smooth out the construction process.

Keywords: buckling, curved composite box girder, stage construction, structural detailing

Procedia PDF Downloads 122
585 The Investigation of Endogenous Intoxication and Lipid Peroxidation in Patients with Giardiasis Before and After Treatment

Authors: R. H. Begaydarova, B. Zh. Kultanov, B. T. Esilbaeva, G. E. Nasakaeva, Y. Yukhnevich, G. K. Alshynbekova, A. E. Dyusembaeva

Abstract:

Background: The level of middle molecules of peptides (MMP) makes it possible to evaluate the severity and prognosis of a disease and is a criterion for the effectiveness of treatment. The detection of lipid peroxidation cascade products, such as conjugated dienes and malondialdehyde, in biological material plays an important role in understanding pathogenesis and in the diagnosis and prognosis of different parasitic diseases. The purpose of the study was to evaluate the state of endogenous intoxication and the indicators of lipid peroxidation in patients with giardiasis before and after treatment. Materials and methods: Endogenous intoxication in patients with giardiasis was evaluated by the level of middle molecules of peptides (MMP) in the blood. The amount of MMP and the products of lipid peroxidation were determined in the blood of 198 patients with giardiasis, of whom 129 were women (65%) and 69 were men (35%). For comparison, the MMP level was measured in the blood of 84 healthy volunteers. Lipid peroxidation was determined in 40 healthy men and women without giardiasis or a history of chronic diseases. Data were processed by conventional methods of variation statistics; the arithmetic mean (M) and standard error (m) were calculated, and the t-test (t) was used to assess differences. Results: The level of MMP in the blood was significantly higher in patients with giardiasis than in the group of healthy men and women. The MMP concentration in the blood of women with giardiasis was 2.5 times greater than that of the women in the comparison group, and the MMP level was more than 6 times higher in men with giardiasis. A decrease in the intensity of endogenous intoxication was observed two weeks after anti-giardia therapy in both men and women. According to the study, a statistically significant increase in all the studied parameters of the lipid peroxidation cascade was observed in the blood of men with giardiasis, with the exception of the total primary production (NGN).
The treatment of giardiasis helped to stabilize the levels of almost all metabolites of the lipid peroxidation cascade. The exception was the malondialdehyde level, which remained significantly elevated compared with the control group even after treatment. Conclusion: Thus, the MMP level was significantly higher in the blood of patients with giardiasis than in the comparison group. This is evidence of severe endogenous intoxication caused by Giardia infection. The accumulation of primary and secondary products of lipid peroxidation was observed in the blood of both men and women, and these processes tend to be more active in men than in women. Anti-giardiasis therapy contributed to the normalization of almost all the studied indicators of lipid peroxidation in the blood of participants, except the malondialdehyde level in the blood of men.
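The group comparison described above (mean M, standard error m, and a t-test on the difference between patients and controls) can be sketched as follows. This is a minimal illustration with invented placeholder numbers, not the study's data; the function implements Welch's two-sample t statistic as one common form of the t-test the abstract mentions.

```python
# Hedged sketch: two-sample (Welch) t-test comparing mean MMP levels
# between a patient group and a control group. All values below are
# hypothetical placeholders, not the study's measurements.
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma = sum(a) / len(a)                 # arithmetic mean (M) of group a
    mb = sum(b) / len(b)                 # arithmetic mean (M) of group b
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variance
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

patients = [0.62, 0.58, 0.71, 0.66, 0.69]   # hypothetical MMP (arb. units)
controls = [0.24, 0.28, 0.22, 0.27, 0.25]
t = welch_t(patients, controls)
print(f"t = {t:.2f}")
```

A large |t| relative to the critical value for the given degrees of freedom indicates a statistically significant group difference, which is the kind of evidence the abstract reports for elevated MMP.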

Keywords: enzymes of antioxidant protection, giardiasis, blood, treatment

Procedia PDF Downloads 239
584 Molecular Diagnosis of a Virus Associated with Red Tip Disease and Its Detection by Non-Destructive Sensor in Pineapple (Ananas comosus)

Authors: A. K. Faizah, G. Vadamalai, S. K. Balasundram, W. L. Lim

Abstract:

Pineapple (Ananas comosus) is a common crop in tropical and subtropical areas of the world. Malaysia once ranked among the top three pineapple producers in the world in the 1960s and early 1970s, after Hawaii and Brazil. Moreover, the government recognized the pineapple crop as one of the priority commodities to be developed for domestic and international markets in the National Agriculture Policy. However, the pineapple industry in Malaysia still faces numerous challenges, one of which is the management of disease and pests. Red tip disease of pineapple was first recognized about 20 years ago in a commercial pineapple stand located in Simpang Renggam, Johor, Peninsular Malaysia. Since its discovery, the causal agent of the disease has not been confirmed, and the epidemiology of red tip disease is still not fully understood. Nevertheless, the disease symptoms and the spread within the field seem to point toward a viral infection. A bioassay test on nucleic acid extracted from red tip-affected pineapple was performed on Nicotiana tabacum cv. Coker by rubbing the extracted sap onto the leaves. Localised lesions were observed 3 weeks after inoculation. Negative staining of the freshly inoculated Nicotiana tabacum cv. Coker showed the presence of membrane-bound spherical particles with an average diameter of 94.25 nm under the transmission electron microscope. The shape and size of the particles were similar to those of a tospovirus. SDS-PAGE analysis of partially purified virions from inoculated N. tabacum produced a strong and a faint protein band with molecular masses of approximately 29 kDa and 55 kDa. Partially purified virions from symptomatic pineapple leaves from the field showed bands with molecular masses of approximately 29 kDa, 39 kDa, and 55 kDa. These bands may indicate the nucleocapsid protein identity of a tospovirus.
Furthermore, a handheld sensor, the GreenSeeker, was used to detect red tip symptoms on pineapple non-destructively based on spectral reflectance, measured as the Normalized Difference Vegetation Index (NDVI). Red tip severity was estimated and correlated with NDVI. Linear regression models were developed, calibrated, and tested in order to estimate red tip disease severity based on NDVI. Results showed a strong positive relationship between red tip disease severity and NDVI (r = 0.84).
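The calibration step described above, fitting a simple linear model of severity against NDVI and reporting Pearson's r, can be sketched as below. The NDVI readings and severity percentages are invented placeholders for illustration, not the study's field data.

```python
# Hedged sketch: ordinary least squares fit of severity = a + b * NDVI,
# with Pearson's correlation coefficient r, mirroring the reported
# severity-NDVI calibration. Data values are hypothetical.
import math

def fit_line(x, y):
    """OLS fit for y = a + b*x; returns (a, b, r)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    b = sxy / sxx                      # slope
    a = my - b * mx                    # intercept
    r = sxy / math.sqrt(sxx * syy)    # Pearson's correlation
    return a, b, r

ndvi     = [0.35, 0.42, 0.50, 0.58, 0.66, 0.72]  # hypothetical readings
severity = [12, 18, 30, 34, 45, 52]              # hypothetical % severity

a, b, r = fit_line(ndvi, severity)
print(f"severity = {a:.1f} + {b:.1f} * NDVI  (r = {r:.2f})")
```

Once calibrated on scored plants, such a model lets the handheld NDVI reading stand in for destructive severity scoring across the rest of the field.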

Keywords: pineapple, diagnosis, virus, NDVI

Procedia PDF Downloads 793
583 Development of Electrochemical Biosensor Based on Dendrimer-Magnetic Nanoparticles for Detection of Alpha-Fetoprotein

Authors: Priyal Chikhaliwala, Sudeshna Chandra

Abstract:

Liver cancer is one of the most common malignant tumors and has a poor prognosis, because it does not exhibit any symptoms in the early stage of the disease. An increased serum level of AFP is clinically considered a diagnostic marker for liver malignancy. The present diagnostic modalities include various types of immunoassays, radiological studies, and biopsy. However, these tests suffer from slow response times, require significant sample volumes, achieve limited sensitivity, and are ultimately expensive and burdensome to patients. Considering all these aspects, an electrochemical biosensor based on dendrimer-magnetic nanoparticles (MNPs) was designed. Dendrimers are novel nano-sized, three-dimensional molecules with monodispersed structures. Poly(amidoamine) (PAMAM) dendrimers with eight –NH₂ groups, using ethylenediamine as the core molecule, were synthesized by the Michael addition reaction. Dendrimers provide the added advantage of not only stabilizing Fe₃O₄ NPs but also the capability of performing multiple electron redox events and binding multiple biological ligands to their dendritic end-surface. Fe₃O₄ NPs, due to their superparamagnetic behavior, can be exploited for magneto-separation processes. The Fe₃O₄ NPs were stabilized with the PAMAM dendrimer by an in situ co-precipitation method. The surface coating was examined by FT-IR, XRD, VSM, and TGA analysis. The electrochemical behavior and kinetics were evaluated using CV, which revealed that the dendrimer-Fe₃O₄ NPs can be regarded as electrochemically active materials. The electrochemical immunosensor was constructed by immobilizing anti-AFP onto the dendrimer-MNPs by a glutaraldehyde conjugation reaction. The bioconjugates were then incubated with the AFP antigen. The immunosensor was characterized electrochemically, indicating successful immuno-binding events.
The binding events were further studied using magnetic particle imaging (MPI), a novel imaging modality in which Fe₃O₄ NPs are used as tracer molecules with positive contrast. Multicolor MPI was able to clearly localize the AFP antigen and antibody and their binding. These results demonstrate immense potential for biosensing and for enabling MPI of AFP in clinical diagnosis.

Keywords: alpha-fetoprotein, dendrimers, electrochemical biosensors, magnetic nanoparticles

Procedia PDF Downloads 136
582 Dynamics Pattern of Land Use and Land Cover Change and Its Driving Factors Based on a Cellular Automata Markov Model: A Case Study at Ibb Governorate, Yemen

Authors: Abdulkarem Qasem Dammag, Basema Qasim Dammag, Jian Dai

Abstract:

Changes in land use and land cover (LU/LC) have a profound impact on an area's natural, economic, and ecological development, and the search for drivers of land cover change is one of the fundamental issues in LU/LC change research. The study aimed to assess the spatio-temporal dynamics of LU/LC in the past and to predict the future using Landsat images by exploring the characteristics of different LU/LC types. Spatio-temporal patterns of LU/LC change in Ibb Governorate, Yemen, were analyzed based on RS and GIS data from 1990, 2005, and 2020. A socioeconomic survey and key informant interviews were used to assess potential drivers of LU/LC change. The results showed that from 1990 to 2020, the total area of vegetation land decreased by 5.3%, while the areas of barren land, grassland, built-up area, and waterbody increased by 2.7%, 1.6%, 1.04%, and 0.06%, respectively. Based on the socio-economic surveys and key informant interviews, natural factors had a significant, long-term impact on land change, whereas site construction and socio-economic factors were the main driving forces affecting land change on a short time scale. The analysis results were linked to the CA-Markov land use simulation and forecasting model for the years 2035 and 2050. The simulation results revealed that, over the period 2020 to 2050, the total area of barren land is projected to decrease by 7.0% and grassland by 0.2%, while vegetation land, built-up area, and waterbody increase by 4.6%, 2.6%, and 0.1%, respectively. Overall, these findings document LU/LC's past and future trends and identify its drivers, which can play an important role in sustainable land use planning and management by balancing and coordinating urban growth and land use, and can also serve as a reference at the regional level.
In addition, the results provide scientific guidance to government departments and local decision-makers in future land-use planning through dynamic monitoring of LU/LC change.
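The Markov half of the CA-Markov projection used above advances land-cover class proportions by repeated multiplication with a transition-probability matrix (the cellular-automata half then allocates those totals spatially). A minimal sketch of that projection step follows; the transition matrix and the 2020 class shares are hypothetical placeholders, not values derived from the Ibb Governorate maps.

```python
# Hedged sketch of a Markov land-cover projection: class proportions are
# advanced one period at a time via a transition-probability matrix.
# All probabilities and initial shares below are illustrative only.
classes = ["vegetation", "barren", "grassland", "built-up", "water"]

# P[i][j]: probability that a cell in class i transitions to class j
# over one projection period (each row must sum to 1).
P = [
    [0.90, 0.04, 0.03, 0.03, 0.00],   # vegetation
    [0.05, 0.88, 0.04, 0.03, 0.00],   # barren
    [0.06, 0.05, 0.86, 0.03, 0.00],   # grassland
    [0.00, 0.00, 0.00, 1.00, 0.00],   # built-up (assumed persistent)
    [0.00, 0.00, 0.00, 0.00, 1.00],   # water (assumed persistent)
]

def step(state, P):
    """One Markov step: new share of class j = sum_i state[i] * P[i][j]."""
    n = len(P)
    return [sum(state[i] * P[i][j] for i in range(n)) for j in range(n)]

state = [0.40, 0.30, 0.15, 0.10, 0.05]  # hypothetical 2020 shares
for _ in range(2):                      # two 15-year steps: 2035, then 2050
    state = step(state, P)

print({c: round(s, 3) for c, s in zip(classes, state)})
```

With row-stochastic P, the class shares stay non-negative and sum to 1 after every step, which is why the projected gains and losses across classes balance out, as in the reported 2020-2050 figures.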

Keywords: LU/LC change, CA-Markov model, driving forces, change detection, LU/LC change simulation

Procedia PDF Downloads 64