Search results for: nonuniform signal processing
1347 Real-Time Radiological Monitoring of the Atmosphere Using an Autonomous Aerosol Sampler
Authors: Miroslav Hyza, Petr Rulik, Vojtech Bednar, Jan Sury
Abstract:
An early and reliable detection of an increased radioactivity level in the atmosphere is one of the key aspects of atmospheric radiological monitoring. Although standard laboratory procedures provide detection limits as low as a few µBq/m³, their major drawback is delayed result reporting: typically a few days. Addressing this issue is the main objective of the HAMRAD project, which gave rise to a prototype of an autonomous monitoring device. It is based on the idea of sequential aerosol sampling using a carousel sample changer combined with a gamma-ray spectrometer. In our hardware configuration, the air is drawn through a filter positioned on the carousel so that it can be rotated into the measuring position after a preset sampling interval. Filter analysis is performed via a 50% HPGe detector inside an 8.5 cm lead shielding. The spectrometer output signal is then analyzed using DSP electronics and Gamwin software with preset nuclide libraries and other analysis parameters. After the counting, the filter is placed into a storage bin with a capacity of 250 filters, so that the device can run autonomously for several months depending on the preset sampling frequency. The device is connected to a central server via GPRS/GSM, where the user can view monitoring data, including raw spectra and technological data describing the state of the device. All operating parameters can be remotely adjusted through a simple GUI. The flow rate is continuously adjustable up to 10 m³/h. The main challenge in spectrum analysis is the natural background subtraction. As detection limits are heavily influenced by the deposited activity of radon decay products, and the measurement time is fixed, there must exist an optimal sample decay time (delayed spectrum acquisition). To solve this problem, we adopted a simple procedure based on sequential spectrum acquisition and an optimal partial spectral sum with respect to the detection limits for a particular radionuclide. The prototype device proved able to detect atmospheric contamination at the level of mBq/m³ per 8 h of sampling.
Keywords: aerosols, atmosphere, atmospheric radioactivity monitoring, autonomous sampler
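The optimal partial spectral sum can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' implementation: it assumes a Currie-style detection limit (L_D ≈ 2.71 + 4.65·√B) and picks the contiguous run of sequentially acquired spectra whose summed counts minimize the detection limit for a chosen radionuclide peak; the spectrum format, efficiency parameters, and background estimate are all assumptions for illustration.

```python
import math

def detection_limit_bq_m3(bkg_counts, live_time_s, eff, gamma_yield, sample_m3):
    """Currie-style detection limit for one peak, converted to Bq/m^3.
    The calibration parameters here are illustrative assumptions."""
    ld_counts = 2.71 + 4.65 * math.sqrt(bkg_counts)
    ld_activity_bq = ld_counts / (eff * gamma_yield * live_time_s)
    return ld_activity_bq / sample_m3

def best_partial_sum(roi_bkg_counts, dt_s, eff, gamma_yield, sample_m3):
    """roi_bkg_counts[i]: background counts in the peak ROI of the i-th
    sequentially acquired spectrum. Returns the contiguous block of spectra
    (delayed start, variable length) minimizing the detection limit."""
    best = (math.inf, 0, 0)
    n = len(roi_bkg_counts)
    for start in range(n):                 # delayed spectrum acquisition
        counts = 0.0
        for end in range(start, n):        # contiguous partial sum
            counts += roi_bkg_counts[end]
            ld = detection_limit_bq_m3(counts, (end - start + 1) * dt_s,
                                       eff, gamma_yield, sample_m3)
            if ld < best[0]:
                best = (ld, start, end)
    return best  # (detection limit, first spectrum index, last spectrum index)
```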
Procedia PDF Downloads 150
1346 Changes in Kidney Tissue at Postmortem Magnetic Resonance Imaging Depending on the Time of Fetal Death
Authors: Uliana N. Tumanova, Viacheslav M. Lyapin, Vladimir G. Bychenko, Alexandr I. Shchegolev, Gennady T. Sukhikh
Abstract:
All cases of stillbirth are undoubtedly subject to postmortem examination, since it is necessary to find out the cause of the stillbirth as well as to make a forecast for future pregnancies and their outcomes. Determination of the time of death, meaning the period from the moment of death until the birth of the fetus, is an important issue addressed during the examination of the body of a stillborn. Determination of the time of fetal death is based on the assessment of the severity of the processes of maceration. The aim was to study the possibilities of postmortem magnetic resonance imaging (MRI) for determining the time of intrauterine fetal death based on the evaluation of maceration in the kidney. We conducted MRI-morphological comparisons of 7 dead fetuses (18-21 gestational weeks), 26 stillbirths (22-39 gestational weeks), and the bodies of 15 deceased newborns aged 2 hours to 36 days. Postmortem 3T MRI was performed before the autopsy. The signal intensity of the kidney tissue (SIK), pleural fluid (SIF), and external air (SIA) was determined on T1-WI and T2-WI. Macroscopic and histological signs of maceration severity and time of death were evaluated at autopsy. Based on the results of the morphological study, the degree of maceration varied from 0 to 4. In 13 cases, the time of intrauterine death was up to 6 hours; in 2 cases, 6-12 hours; in 4 cases, 12-24 hours; in 9 cases, 2-3 days; in 3 cases, 1 week; and in 2 cases, 1.5-2 weeks. In the 15 deceased newborns, signs of maceration were naturally absent. Based on the data for SIK, SIF, and SIA on MR tomograms, we calculated the coefficient of MR maceration (M). The calculation of the time of intrauterine death (MR-t) (hours) was performed by our formula: MR-t = 16.87 + 95.38×M² − 75.32×M. A direct positive correlation between MR-t and autopsy data was obtained for those who died at gestational ages of 22-40 weeks with a time of death of not more than 1 week. Maceration after antenatal fetal death is characterized by changes in T1-WI and T2-WI signals at postmortem MRI. The calculation of MR-t allows accurate determination of the time of intrauterine death within one week for stillbirths who died at 22-40 gestational weeks. Thus, our study convincingly demonstrates that radiological methods can be used for the postmortem study of bodies, in particular the bodies of stillborns, to determine the time of intrauterine death. Postmortem MRI allows for an objective and sufficiently accurate analysis of pathological processes, with the possibility of their documentation, storage, and analysis after the burial of the body.
Keywords: intrauterine death, maceration, postmortem MRI, stillborn
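The regression formula can be applied directly once M is known. The snippet below is a minimal sketch of that calculation; how M itself is derived from SIK, SIF, and SIA is not fully specified in the abstract, so the normalization shown is an assumption for illustration only.

```python
def mr_maceration_coefficient(sik, sif, sia):
    """Illustrative normalization of kidney signal intensity against pleural
    fluid and air references; the authors' actual definition of M may differ."""
    return (sik - sia) / (sif - sia)

def intrauterine_death_time_hours(m):
    # MR-t = 16.87 + 95.38*M^2 - 75.32*M, as given in the abstract.
    return 16.87 + 95.38 * m**2 - 75.32 * m
```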
Procedia PDF Downloads 125
1345 Artificial Intelligence in Art and Other Sectors: Selected Aspects of Mutual Impact
Authors: Justyna Minkiewicz
Abstract:
Artificial Intelligence (AI) applied in the arts may influence the development of AI knowledge in other sectors and thus foster mutual collaboration with the artistic environment. Hence, this collaboration may also impact the development of art projects. The paper reflects qualitative research outcomes based on in-depth interviews (IDI) within the marketing sector in Poland and on desk research. Art is a reflection of the spirit of our times. Moreover, we are now experiencing a significant acceleration in the development of technologies and their use in various sectors. The leading technologies that contribute to the development of the economy, including the creative sector, embrace artificial intelligence, blockchain, extended reality, voice processing, and virtual beings. Artificial intelligence is one of the leading technologies, developed over several decades, which is currently reaching a high level of interest and use in various sectors. However, the conducted research has shown that there is still low awareness of artificial intelligence and its wide application in various sectors. The study shows how artists use artificial intelligence in their art projects and how this can be translated into practice within business. At the same time, the paper raises awareness of the need for businesses to be inspired by the artistic environment. The research proved that there is still a need to popularize knowledge about this technology, which is crucial for many sectors. Art projects are tools to develop the knowledge and awareness of society and also of various sectors. At the same time, artists may benefit from such collaboration. The paper includes selected aspects of mutual relations, areas of possible inspiration, and possible transfers of technological solutions, such as AI applications in creative industries like advertising and film, image recognition in art, and projects from different sectors.
Keywords: artificial intelligence, business, art, creative industry, technology
Procedia PDF Downloads 105
1344 Starchy Wastewater as Raw Material for Biohydrogen Production by Dark Fermentation: A Review
Authors: Tami A. Ulhiza, Noor I. M. Puad, Azlin S. Azmi, Mohd. I. A. Malek
Abstract:
The high chemical oxygen demand (COD) of starchy waste can be harmful to the environment. In common practice, starch processing wastewater is discharged to the river without proper treatment. However, starchy waste still contains complex sugars and organic acids. With the right pretreatment method, the complex sugars can be hydrolyzed into more readily digestible sugars, which can then be converted into more valuable products. At the same time, the global demand for energy keeps growing, and the continued use of fossil fuels as the main source of energy can lead to energy scarcity. Hydrogen is a renewable form of energy which can serve as an alternative energy source in the future. Moreover, hydrogen is clean and carries the highest energy content of any fuel. Biohydrogen produced from waste has significant advantages over chemical methods. One of the major problems in biohydrogen production is the raw material cost. Carbohydrate-rich starchy wastes such as tapioca, maize, wheat, potato, and sago wastes are promising candidates for use as substrates in producing biohydrogen. The utilization of these wastes for biohydrogen production can provide cheap energy generation with simultaneous waste treatment. Therefore, this paper aims to review the variety of starchy waste sources that have been widely used to synthesize biohydrogen. The scope includes the source of waste, the hydrogen yield performance, the pretreatment method, and the type of culture that is suitable for starchy waste.
Keywords: biohydrogen, dark fermentation, renewable energy, starchy waste
Procedia PDF Downloads 223
1343 Comparative Study on Sensory Profiles of Liquor from Different Dried Cocoa Beans
Authors: Khairul Bariah Sulaiman, Tajul Aris Yang
Abstract:
Malaysian dried cocoa beans have been reported to have a low-quality flavour and are often sold at discounted prices. Various efforts have been made to improve the quality of Malaysian beans. Among these efforts are the introduction of the shallow box fermentation technique and pulp preconditioning through pod storage. However, after nearly four decades of these efforts, Malaysian cocoa farmers still receive lower prices for their beans. This study was therefore carried out in order to assess the flavour quality of dried cocoa beans produced by the shallow box fermentation technique, or by a combination of shallow box fermentation with pod storage, compared to dried cocoa beans obtained from Ghana. A total of eight samples of dried cocoa were used in this study, one of which was Ghanaian beans (coded no. 8), while the rest were Malaysian cocoa beans with different post-harvest processing (coded nos. 1, 2, 3, 4, 5, 6, and 7). Cocoa liquor was prepared from all samples using the prescribed techniques, and sensory evaluation was carried out using the Quantitative Descriptive Analysis (QDA) method on a 0-10 scale by trained Malaysian Cocoa Board panelists. Sensory evaluation showed that the cocoa attribute for all cocoa liquors ranged from 3.5 to 5.3, whereas bitterness ranged from 3.4 to 4.6 and the astringent attribute from 3.9 to 5.5. Meanwhile, all cocoa liquors had an acid or sourness attribute ranging from 1.6 to 3.6. In general, the cocoa liquor prepared from the sample coded no. 4 had an almost similar flavour profile to, and was not significantly different at p < 0.05 from, the Ghanaian sample in terms of most flavour attributes, as compared to the other six samples.
Keywords: cocoa beans, flavour, fermentation, shallow box, pods storage
Procedia PDF Downloads 394
1342 Automated User Story Driven Approach for Web-Based Functional Testing
Authors: Mahawish Masud, Muhammad Iqbal, M. U. Khan, Farooque Azam
Abstract:
Manual writing of test cases from functional requirements is a time-consuming task. Such test cases are not only difficult to write but are also challenging to maintain. Test cases can be drawn from functional requirements that are expressed in natural language. However, manual test case generation is inefficient and subject to errors. In this paper, we present a systematic procedure that can automatically derive test cases from user stories. The user stories are specified in a restricted natural language using a well-defined template. We also present a detailed methodology for writing our test-ready user stories. Our tool, "Test-o-Matic", automatically generates the test cases by processing the restricted user stories. The generated test cases are executed using the open source Selenium IDE. We evaluate our approach on a case study, which is an open source web-based application. The effectiveness of our approach is evaluated by seeding faults in the open source case study using known mutation operators. Results show that test case generation from restricted user stories is a viable approach for the automated testing of web applications.
Keywords: automated testing, natural language, restricted user story modeling, software engineering, software testing, test case specification, transformation and automation, user story, web application testing
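The paper's template is not reproduced in the abstract, so the sketch below illustrates the general idea with an assumed restricted "As a/when/then" template; the field names, regular expression, and output format are illustrative, not Test-o-Matic's actual format.

```python
import re

# Assumed restricted template: "As a <role>, when I <action> then I expect <outcome>."
STORY_PATTERN = re.compile(
    r"As a (?P<role>.+?), when I (?P<action>.+?) then I expect (?P<outcome>.+?)\.?"
)

def story_to_test_case(story: str) -> dict:
    """Derive a skeletal test case from one restricted user story."""
    match = STORY_PATTERN.fullmatch(story.strip())
    if match is None:
        raise ValueError("Story does not follow the restricted template")
    return {
        "precondition": f"Logged in as {match['role']}",
        "step": match["action"],
        "expected_result": match["outcome"],
    }

print(story_to_test_case(
    "As a registered user, when I submit valid credentials "
    "then I expect the dashboard page to load."
))
```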
Procedia PDF Downloads 387
1341 Functional Yoghurt Enriched with Microencapsulated Olive Leaves Extract Powder Using Polycaprolactone via Double Emulsion/Solvent Evaporation Technique
Authors: Tamer El-Messery, Teresa Sanchez-Moya, Ruben Lopez-Nicolas, Gaspar Ros, Esmat Aly
Abstract:
Olive leaves (OLs), the main by-product of the olive oil industry, contain a considerable amount of phenolic compounds. The exploitation of these compounds represents a current trend in food processing. In this study, OLs polyphenols were microencapsulated with polycaprolactone (PCL) and utilized in formulating a novel functional yoghurt. The PCL microcapsules were characterized by scanning electron microscopy and Fourier transform infrared spectrometry analysis. Their total phenolic content (TPC), total flavonoid content (TFC), antioxidant activities (DPPH, FRAP, ABTS), and polyphenol bioaccessibility were measured after the oral, gastric, and intestinal steps of in vitro digestion. The four yoghurt formulations (containing 0, 25, 50, and 75 mg of PCL microspheres/100 g yoghurt) were evaluated for their pH, acidity, syneresis, viscosity, and color during storage. In vitro digestion significantly affected the phenolic composition of the non-encapsulated extract while having a lower impact on the encapsulated phenolics. Higher protection was provided for the encapsulated OLs extract, and its greater release was observed at the intestinal phase. Yoghurt with PCL microspheres had lower viscosity, syneresis, and color parameters compared to the control yoghurt. Thus, OLs represent a valuable and cheap source of polyphenols which can be successfully applied, in microencapsulated form, to formulate functional yoghurt.
Keywords: yoghurt quality attributes, olive leaves, phenolic and flavonoids compounds, antioxidant activity, polycaprolactone as microencapsulant
Procedia PDF Downloads 142
1340 Light-Weight Network for Real-Time Pose Estimation
Authors: Jianghao Hu, Hongyu Wang
Abstract:
An effective and efficient human pose estimation algorithm is important for real-time human pose estimation on mobile devices. This paper proposes a light-weight human key point detection algorithm, the Light-Weight Network for Real-Time Pose Estimation (LWPE). LWPE uses a light-weight backbone network and depthwise separable convolutions to reduce parameters and lower latency. LWPE uses a feature pyramid network (FPN) to fuse the high-resolution, semantically weak features with the low-resolution, semantically strong features. In the meantime, with multi-scale prediction, the result predicted from the low-resolution feature map is stacked onto the adjacent higher-resolution feature map to provide intermediate supervision of the network and continuously refine the results. In the last step, the key point coordinates predicted at the highest resolution are used as the final output of the network. For the key points that are difficult to predict, LWPE adopts an online hard key point mining strategy to focus on them. The proposed algorithm achieves excellent performance on the single-person dataset selected from the AI (artificial intelligence) challenge dataset. The algorithm maintains high-precision performance even though the model only contains 3.9M parameters, and it can run at 225 frames per second (FPS) on a generic graphics processing unit (GPU).
Keywords: depthwise separable convolutions, feature pyramid network, human pose estimation, light-weight backbone
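Depthwise separable convolution is the standard factorization of a k×k convolution into a per-channel spatial convolution followed by a 1×1 pointwise convolution. The block below is a minimal PyTorch sketch of that building block, not LWPE's actual layer definition; channel counts and normalization choices are assumptions.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """k x k depthwise conv (groups == in_channels) + 1 x 1 pointwise conv.
    Parameter count drops from k*k*Cin*Cout to k*k*Cin + Cin*Cout."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride=stride,
                                   padding=kernel_size // 2, groups=in_ch,
                                   bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

x = torch.randn(1, 32, 64, 48)                  # (batch, channels, height, width)
print(DepthwiseSeparableConv(32, 64)(x).shape)  # torch.Size([1, 64, 64, 48])
```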
Procedia PDF Downloads 154
1339 Substituted Thiazole Analogues as Anti-Tumor Agents
Authors: Menna Ewida, Dalal Abou El-Ella, Dina Lasheen, Huessin El-Subbagh
Abstract:
Introduction: Vascular endothelial growth factor (VEGF) is a signal protein produced by cells that stimulates vasculogenesis to create new blood vessels. The VEGF family binds to three trans-membrane tyrosine kinase receptors. Dihydrofolate reductase (DHFR) is an enzyme of crucial importance in medicinal chemistry. DHFR catalyzes the reduction of 7,8-dihydrofolate to tetrahydrofolate and is intimately coupled with thymidylate synthase, a pivotal enzyme that catalyzes the reductive methylation of deoxyuridine monophosphate (dUMP) to deoxythymidine monophosphate (dTMP), utilizing N5,N10-methylene tetrahydrofolate as a cofactor which functions as the source of the methyl group. Purpose: Novel substituted thiazole agents were designed as DHFR and VEGF-TK inhibitors with increased synergistic activity and decreased side effects. Methods: Five series of compounds were designed with a rationale that mimics the pharmacophoric features present in the reported active compounds that target DHFR and VEGFR. These molecules were docked against methotrexate (MTX) and sorafenib as controls. An in silico ADMET study was also performed to validate the bioavailability of the newly designed compounds. The in silico molecular docking and ADMET study were also applied to the non-classical antifolates for comparison. Interaction energies comparable to that of MTX for DHFR inhibitory activity and sorafenib for VEGF-TK inhibitory activity were recorded. Results: Compound 5 exhibited the highest interaction energy when docked against sorafenib, while compound 9 showed the highest interaction energy when docked against MTX, with the perfect binding mode. Comparable results were also obtained for the ADMET study. Most of the compounds showed absorption within the 95-99 zone, which varies according to the type of substituents. Conclusions: The substituted thiazole analogues could be a suitable template for antitumor drugs that possess enhanced bioavailability and act as DHFR and VEGF-TK inhibitors.
Keywords: anti-tumor agents, DHFR, drug design, molecular modeling, VEGFR-TKIs
Procedia PDF Downloads 236
1338 Business Continuity Risk Review for a Large Petrochemical Complex
Authors: Michel A. Thomet
Abstract:
A discrete-event simulation model was used to perform a Reliability-Availability-Maintainability (RAM) study of a large petrochemical complex which included sixteen process units and seven feeds and intermediate streams. All the feeds and intermediate streams have associated storage tanks, so that if a processing unit fails and shuts down, the downstream units can keep producing their outputs. This also helps the upstream units, which do not have to reduce their outputs but can store their excess production until the failed unit restarts. Each process unit and each pipe section carrying the feeds and intermediate streams has a probability of failure with an associated distribution and a Mean Time Between Failures (MTBF), as well as a distribution of the time to restore and a Mean Time To Restore (MTTR). The utilities supporting the process units can also fail and have their own distributions with specific MTBF and MTTR. The model runs span ten years or more, and the runs are repeated several times to obtain statistically relevant results. One of the main results is the On-Stream Factor (OSF) of each process unit (the percentage of hours in a year when the unit is running under nominal conditions). One of the objectives of the study was to investigate whether the storage capacity for each of the feeds and intermediate streams was adequate. This was done by increasing the storage capacities in several steps and running the simulation to see whether the OSFs improved and by how much. Other objectives were to see whether the failure of the utilities was an important factor in the overall OSF, and what could be done to reduce failure rates through redundant equipment.
Keywords: business continuity, on-stream factor, petrochemical, RAM study, simulation, MTBF
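A minimal sketch of the core RAM loop is shown below: it samples exponential failure and restore times for a single unit from its MTBF/MTTR and estimates the OSF over the simulated horizon. The exponential distributions and single-unit scope are simplifying assumptions; the study's model covers sixteen interacting units, storage tanks, and utilities.

```python
import random

def simulate_osf(mtbf_h, mttr_h, horizon_h=10 * 8760, seed=42):
    """Estimate the On-Stream Factor of one unit by alternating exponentially
    distributed up-times (mean MTBF) and down-times (mean MTTR)."""
    rng = random.Random(seed)
    t, up_time = 0.0, 0.0
    while t < horizon_h:
        run = rng.expovariate(1.0 / mtbf_h)   # time until the next failure
        up_time += min(run, horizon_h - t)
        t += run
        t += rng.expovariate(1.0 / mttr_h)    # repair time
    return up_time / horizon_h

# Replicated runs for statistically relevant results, as in the study.
runs = [simulate_osf(2000, 48, seed=s) for s in range(20)]
print(f"mean OSF = {sum(runs) / len(runs):.3f}")
```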
Procedia PDF Downloads 219
1337 Space Time Adaptive Algorithm in Bi-Static Passive Radar Systems for Clutter Mitigation
Authors: D. Venu, N. V. Koteswara Rao
Abstract:
Space-time adaptive processing (STAP) is an effective tool for detecting a moving target in spaceborne or airborne radar systems. Airborne passive radar systems utilize broadcast, navigation, and communication signals to perform various surveillance tasks and have attracted significant interest in the recent past; the need of the hour is therefore to have cost-effective systems compared to conventional active radar systems. Moreover, the requirement of only a small number of secondary samples for effective clutter suppression in bi-static passive radar offers abundant illuminator resources for passive surveillance radar systems. This paper presents a framework for incorporating knowledge sources directly in the space-time beamformer of airborne adaptive radars. The STAP algorithm for clutter mitigation in passive bi-static radar better quantifies the reduction in sample size by amalgamating the earlier data bank with existing radar data sets. We also propose a novel method to estimate the clutter covariance matrix and perform STAP for efficient clutter suppression based on a small sample size. Furthermore, the effectiveness of the proposed algorithm is verified using MATLAB simulations in order to validate the STAP algorithm for passive bi-static radar. In conclusion, this study highlights the importance for various applications which augment traditional active radars using cost-effective measures.
Keywords: bistatic radar, clutter, covariance matrix, passive radar, STAP
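The classical STAP core that such a framework builds on is the sample-covariance-based adaptive weight: estimate the clutter-plus-noise covariance from secondary range cells and apply w ∝ R⁻¹s, suitably normalized. The NumPy sketch below shows only this textbook step, with diagonal loading for the small-sample case; it is not the paper's knowledge-aided estimator, and the sizes are illustrative.

```python
import numpy as np

def stap_weights(secondary, steering, loading=1e-2):
    """secondary: (N, K) complex snapshots from K training range cells,
    steering: (N,) space-time steering vector for the target of interest."""
    n, k = secondary.shape
    r_hat = secondary @ secondary.conj().T / k                # sample covariance
    r_hat += loading * np.trace(r_hat).real / n * np.eye(n)   # diagonal loading
    w = np.linalg.solve(r_hat, steering)                      # w ∝ R^{-1} s
    return w / (steering.conj() @ w)                          # MVDR-style scaling

rng = np.random.default_rng(0)
n, k = 32, 48                                                 # small training support
x = (rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))) / np.sqrt(2)
s = np.exp(2j * np.pi * 0.1 * np.arange(n)) / np.sqrt(n)
print(np.abs(stap_weights(x, s).conj() @ s))                  # unit response toward s
```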
Procedia PDF Downloads 296
1336 3D Modeling for Frequency and Time-Domain Airborne EM Systems with Topography
Authors: C. Yin, B. Zhang, Y. Liu, J. Cai
Abstract:
Airborne EM (AEM) is an effective geophysical exploration tool, especially suitable for rugged mountain areas. In these areas, topography will have serious effects on AEM system responses. However, until now little study has been reported on the topographic effect on airborne EM systems. In this paper, an edge-based unstructured finite-element (FE) method is developed for 3D topographic modeling for both frequency- and time-domain airborne EM systems. Starting from the frequency-domain Maxwell equations, a vector Helmholtz equation is derived to obtain a stable and accurate solution. Considering that the AEM transmitter and receiver are both located in the air, the scattered field method is used in our modeling. The Galerkin method is applied to discretize the Helmholtz equation to obtain the final FE equations. Solving the FE equations yields the frequency-domain AEM responses. To accelerate the calculation, the response of the source in free space is used as the primary field, and the PARDISO direct solver is used to deal with the problem of multiple transmitting sources. After calculating the frequency-domain AEM responses, a Hankel transform is applied to obtain the time-domain AEM responses. To check the accuracy of the present algorithm and to analyze the characteristics of the topographic effect on airborne EM systems, both the frequency- and time-domain AEM responses are simulated for 3 model groups: 1) a flat half-space model that has a semi-analytical solution for the EM response; 2) a valley or hill earth model; 3) a valley or hill earth with an anomalous body embedded. Numerical experiments show that close to the node points of the topography, AEM responses demonstrate sharp changes. Special attention needs to be paid to the topographic effects when interpreting AEM survey data over rugged topographic areas. Besides, the profile of the AEM responses presents a mirror relation with the topographic earth surface. In comparison to the topographic effect, which mainly occurs at the high-frequency end and in early time channels, the EM responses of underground conductors mainly occur at low frequencies and in later time channels. For a signal in the same time channel, the dB/dt field reflects the change of conductivity better than the B-field. The research of this paper will serve airborne EM in the identification and correction of topographic effects.
Keywords: 3D, airborne EM, forward modeling, topographic effect
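The frequency-to-time conversion step can be illustrated with the standard causal-signal identity f(t) = -(2/π)∫₀^∞ Im[F(ω)] sin(ωt) dω, evaluated numerically over the computed frequencies. The sketch below is a generic trapezoidal-quadrature version of this transform, offered as an illustration under those assumptions rather than the paper's actual transform implementation.

```python
import numpy as np

def freq_to_time(omega, f_imag, times):
    """Convert Im[F(omega)] sampled at angular frequencies `omega` to f(t)
    via f(t) = -(2/pi) * integral Im[F(w)] sin(w t) dw (trapezoidal rule)."""
    f_t = np.empty_like(times)
    for i, t in enumerate(times):
        f_t[i] = -2.0 / np.pi * np.trapz(f_imag * np.sin(omega * t), omega)
    return f_t

# Sanity check with F(w) = 1/(1 + i*w), whose time function is exp(-t) for t > 0.
omega = np.linspace(1e-3, 2e3, 200_000)
f_imag = -omega / (1.0 + omega**2)
t = np.array([0.5, 1.0, 2.0])
print(freq_to_time(omega, f_imag, t))   # close to exp(-t) = [0.607, 0.368, 0.135]
```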
Procedia PDF Downloads 317
1335 Developing an Out-of-Distribution Generalization Model Selection Framework through Impurity and Randomness Measurements and a Bias Index
Authors: Todd Zhou, Mikhail Yurochkin
Abstract:
Out-of-distribution (OOD) detection is receiving increasing amounts of attention in the machine learning research community, boosted by recent technologies such as autonomous driving and image processing. This newly burgeoning field has called for more effective and efficient out-of-distribution generalization methods. Without access to label information, deploying machine learning models to out-of-distribution domains becomes extremely challenging, since it is impossible to evaluate model performance on unseen domains. To tackle this out-of-distribution detection difficulty, we designed a model selection pipeline algorithm and developed a model selection framework with different impurity and randomness measurements to evaluate and choose the best-performing models for out-of-distribution data. By exploring different randomness scores based on predicted probabilities, we adopted the out-of-distribution entropy and developed a custom-designed score, "CombinedScore," as the evaluation criterion. This proposed score was created by adding labeled source information into the judging space of the uncertainty entropy score using the harmonic mean. Furthermore, prediction bias was explored through the equality-of-opportunity violation measurement. We also improved machine learning model performance through model calibration. The effectiveness of the framework with the proposed evaluation criteria was validated on the Folktables American Community Survey (ACS) datasets.
Keywords: model selection, domain generalization, model fairness, randomness measurements, bias index
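The abstract does not give the exact CombinedScore formula, so the snippet below is only one plausible reading: combine a normalized predictive-entropy score on the unlabeled target data with labeled-source accuracy via the harmonic mean. The normalization and the choice of source accuracy as the "labeled source information" are assumptions.

```python
import numpy as np

def predictive_entropy(probs):
    """Mean entropy of predicted class probabilities, normalized to [0, 1]."""
    eps = 1e-12
    ent = -np.sum(probs * np.log(probs + eps), axis=1)
    return float(np.mean(ent) / np.log(probs.shape[1]))

def combined_score(target_probs, source_accuracy):
    """Harmonic mean of (1 - normalized OOD entropy) and source accuracy;
    higher is better. One plausible reading of 'CombinedScore', not the
    authors' exact definition."""
    confidence = 1.0 - predictive_entropy(target_probs)
    return 2 * confidence * source_accuracy / (confidence + source_accuracy + 1e-12)

probs = np.array([[0.7, 0.2, 0.1], [0.5, 0.3, 0.2]])
print(combined_score(probs, source_accuracy=0.85))
```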
Procedia PDF Downloads 124
1334 Bottleneck Modeling in Information Technology Service Management
Authors: Abhinay Puvvala, Veerendra Kumar Rai
Abstract:
A bottleneck situation arises when the outflow is less than the inflow in a pipe-like setup. A more practical interpretation of bottlenecks emphasizes the realization of Service Level Objectives (SLOs) at given workloads. Our approach detects two key aspects of bottlenecks: when and where. To identify 'when', we continuously poll certain key metrics such as resource utilization, processing time, request backlog, and throughput at a system level. Then, while the workload is being gradually increased in discrete steps, a bottleneck situation arises when the slope of the expected sojourn time at a workload is greater than 'K' times the slope of the expected sojourn time at the previous step of the workload. 'K' defines the threshold condition and is computed based on the system's service level objectives. The second aspect of our approach is to identify the location of the bottleneck. In multi-tier systems with a complex network of layers, locating the bottleneck that affects overall system performance is a challenging problem. We stage the system by varying the workload incrementally to draw a correlation between load increase and system performance up to the point where the Service Level Objectives are violated. During the staging process, multiple metrics are monitored at the hardware and application levels. Correlations are drawn between the metrics and the overall system performance. These correlations, along with the Service Level Objectives, are used to arrive at the threshold conditions for each of these metrics. Subsequently, the same method used to identify when a bottleneck occurs is applied to the metrics data with threshold conditions to locate bottlenecks.
Keywords: bottleneck, workload, service level objectives (SLOs), throughput, system performance
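The 'when' test reduces to a slope comparison between consecutive workload steps. The sketch below is a minimal rendering of that rule; the data points and the value of K are illustrative, and K would in practice be derived from the SLOs as described.

```python
def bottleneck_onset(workloads, sojourn_times, k):
    """Return the index of the first workload step whose sojourn-time slope
    exceeds k times the slope at the previous step, or None."""
    slopes = [
        (sojourn_times[i + 1] - sojourn_times[i]) / (workloads[i + 1] - workloads[i])
        for i in range(len(workloads) - 1)
    ]
    for i in range(1, len(slopes)):
        if slopes[i - 1] > 0 and slopes[i] > k * slopes[i - 1]:
            return i + 1  # workload step where the bottleneck sets in
    return None

# Illustrative staging data: requests/s vs. mean sojourn time (ms).
load = [100, 200, 300, 400, 500]
sojourn = [20.0, 22.0, 25.0, 40.0, 90.0]
print(bottleneck_onset(load, sojourn, k=2.5))  # -> 3 (i.e., at 400 req/s)
```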
Procedia PDF Downloads 237
1333 Visual Inspection of Road Conditions Using Deep Convolutional Neural Networks
Authors: Christos Theoharatos, Dimitris Tsourounis, Spiros Oikonomou, Andreas Makedonas
Abstract:
This paper focuses on the problem of visually inspecting and recognizing the road conditions in front of moving vehicles, targeting automotive scenarios. The goal of road inspection is to identify whether the road is slippery or not, as well as to detect possible anomalies on the road surface such as potholes or body bumps/humps. Our work is based on an artificial intelligence methodology for real-time monitoring of road conditions in autonomous driving scenarios, using state-of-the-art deep convolutional neural network (CNN) techniques. Initially, the road and ego lane are segmented within the field of view of the camera that is integrated into the front part of the vehicle. A novel classification CNN is utilized to distinguish between plain and slippery road textures (e.g., wet, snow, etc.). Simultaneously, a robust detection CNN identifies severe surface anomalies within the ego lane, such as potholes and speed bumps/humps, within a distance of 5 to 25 meters. The overall methodology is illustrated within the scope of an integrated application (or system), which can be incorporated into complete Advanced Driver-Assistance Systems (ADAS) that provide a full range of functionalities. The proposed techniques deliver state-of-the-art detection and classification results with real-time performance, running on AI accelerator devices like Intel's Myriad 2/X Vision Processing Unit (VPU).
Keywords: deep learning, convolutional neural networks, road condition classification, embedded systems
Procedia PDF Downloads 134
1332 Development of Latent Fingerprints on Non-Porous Surfaces Recovered from Fresh and Sea Water
Authors: A. Somaya Madkour, B. Abeer sheta, C. Fatma Badr El Dine, D. Yasser Elwakeel, E. Nermine AbdAllah
Abstract:
Criminal offenders have a fundamental goal of not leaving any traces at the crime scene. Some may suppose that items recovered underwater will have no forensic value; therefore, they try to destroy the traces by throwing items into water. These traces are then subjected to destructive environmental effects. This can represent a challenge for forensic experts investigating finger marks. Accordingly, the present study was conducted to determine the optimal method for latent fingerprint development on non-porous surfaces submerged in aquatic environments for different time intervals. The two factors analyzed in this study were the nature of the aquatic environment and the length of submersion time. In addition, the quality of the developed finger marks depending on the method used was also assessed. Therefore, latent fingerprints were deposited on metallic, plastic, and glass objects and submerged in fresh or sea water for one, two, or ten days. After recovery, the items were subjected to cyanoacrylate fuming, black powder, and small particle reagent processing, and the prints were examined. Each print was evaluated according to a fingerprint quality assessment scale. The present study demonstrated that the duration of submersion affects the quality of finger marks: the longer the duration, the worse the quality. The best visualization results were achieved using cyanoacrylate, in both fresh and sea water. This study also revealed that exposure to sea water had a more destructive influence on the quality of the detected finger marks.
Keywords: fingerprints, fresh water, sea, non-porous
Procedia PDF Downloads 455
1331 IL4/IL13 STAT6 Mediated Macrophage Polarization During Acute and Chronic Pancreatitis
Authors: Hager Elsheikh, Juliane Glaubitz, Frank Ulrich Weiss, Matthias Sendler
Abstract:
Aim: Acute pancreatitis (AP) and chronic pancreatitis (CP) are both accompanied by a prominent immune response which influences the course of the disease. Whereas during AP the pro-inflammatory immune response dominates, during CP a fibroinflammatory response regulates organ remodeling. The transcription factor signal transducer and activator of transcription 6 (STAT6) is a crucial part of the type 2 immune response. Here we investigate the role of STAT6 in mouse models of AP and CP. Material and Methods: AP was induced by hourly repetitive i.p. injections of caerulein (50 µg/kg body weight) in C57Bl/6J and STAT6-/- mice. CP was induced by repetitive caerulein injections 6 times a day, 3 days a week, over 4 weeks. Disease severity was evaluated by serum amylase/lipase measurement and H&E staining of the pancreas. The pancreatic infiltrate was characterized by immunofluorescent labeling of CD68, CD206, CCR2, CD4, and CD8. Pancreatic fibrosis was evaluated by Azan blue staining. qRT-PCR was performed for Arg1, Nos2, Il6, Il1b, Col3a, Socs3, and Ym1. Affymetrix chip array analyses were done to illustrate IL4/IL13/STAT6 signaling in bone-marrow-derived macrophages (BMDM). Results: AP severity is mitigated in STAT6-/- mice, as shown by decreased serum amylase and lipase as well as histological damage. CP mice surprisingly showed only slightly reduced fibrosis of the pancreas. Also, staining of CD206, a classical marker of alternatively activated macrophages, showed no decrease of M2-like polarization in the absence of STAT6. In contrast, transcription profile analysis in BMDM showed a complete blockade of the IL4/IL13 pathway in STAT6-/- animals. Conclusion: The STAT6 signaling pathway is protective during AP and mitigates pancreatic damage. During chronic pancreatitis, the IL4/IL13-STAT6 axis is involved in organ fibrogenesis. Notably, fibrosis is not dependent on a single signaling pathway, and alternative macrophage activation is also complex, involving different subclasses (M2a, M2b, M2c, and M2d) which could be independent of the IL4/IL13-STAT6 axis.
Keywords: chronic pancreatitis, macrophages, IL4/IL13, type 2 immune response
Procedia PDF Downloads 67
1330 Self-Tuning Power System Stabilizer Based on Recursive Least Square Identification and Linear Quadratic Regulator
Authors: J. Ritonja
Abstract:
Available commercial applications of power system stabilizers assure optimal damping of a synchronous generator's oscillations only in a small part of the operating range. The parameters of the power system stabilizer are usually tuned for a selected operating point. Extensive variations in the synchronous generator's operation result in changed dynamic characteristics. This is the reason that a power system stabilizer tuned for the nominal operating point does not provide the preferred damping over the whole operating area. The small-signal stability and the transient stability of synchronous generators have represented an attractive problem for testing different concepts of modern control theory. Of all the methods, adaptive control has proved to be the most suitable for the design of power system stabilizers. Adaptive control has been used in order to assure optimal damping through the entire synchronous generator's operating range. The use of adaptive control is possible because the loading variations, and consequently the variations of the synchronous generator's dynamic characteristics, are in most cases essentially slower than the adaptation mechanism. The paper shows the development and application of a self-tuning power system stabilizer based on the recursive least squares identification method and a linear quadratic regulator. The identification method is used to calculate the parameters of the Heffron-Phillips model of the synchronous generator. On the basis of the calculated parameters of the synchronous generator's mathematical model, the synthesis of the linear quadratic regulator is carried out. The identification and the synthesis are implemented on-line. In this way, the self-tuning power system stabilizer adapts to different operating conditions. The purpose of this paper is to contribute to the development of more effective power system stabilizers, which would replace the currently used linear stabilizers. The presented self-tuning power system stabilizer makes the tuning of the controller parameters easier and assures damping improvement over the complete operating range. The results of simulations and experiments show an essential improvement in the synchronous generator's damping and power system stability.
Keywords: adaptive control, linear quadratic regulator, power system stabilizer, recursive least square identification
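The on-line identification step follows the textbook recursive least squares (RLS) update with exponential forgetting. The sketch below shows that generic update for a discrete-time ARX-type model; the regressor layout and forgetting factor are illustrative, and mapping the estimated parameters onto the Heffron-Phillips model constants is a separate step not shown here.

```python
import numpy as np

class RecursiveLeastSquares:
    """Generic RLS with forgetting factor lam for y_k = phi_k^T * theta + e_k."""
    def __init__(self, n_params, lam=0.98, p0=1e4):
        self.theta = np.zeros(n_params)        # parameter estimates
        self.P = p0 * np.eye(n_params)         # estimate covariance
        self.lam = lam

    def update(self, phi, y):
        Pphi = self.P @ phi
        gain = Pphi / (self.lam + phi @ Pphi)  # Kalman-style gain vector
        self.theta += gain * (y - phi @ self.theta)
        self.P = (self.P - np.outer(gain, Pphi)) / self.lam
        return self.theta

# Identify y_k = a*y_{k-1} + b*u_{k-1} from noisy data (true a=0.9, b=0.5).
rls, y, rng = RecursiveLeastSquares(2), 0.0, np.random.default_rng(1)
for _ in range(500):
    u = rng.standard_normal()
    y_next = 0.9 * y + 0.5 * u + 0.01 * rng.standard_normal()
    rls.update(np.array([y, u]), y_next)
    y = y_next
print(rls.theta)   # approaches [0.9, 0.5]
```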
Procedia PDF Downloads 247
1329 Pervasive Computing: Model to Increase Arable Crop Yield through Detection Intrusion System (IDS)
Authors: Idowu Olugbenga Adewumi, Foluke Iyabo Oluwatoyinbo
Abstract:
Presently, there are several discussions on food security and increasing the yield of arable crops throughout the world. This article briefly presents research efforts to create digital interfaces to nature, in particular in the area of crop production in agriculture, with an interest in pervasive computing for increasing yield. The approach goes beyond the use of sensor networks for environmental monitoring by also emphasizing the development of a system architecture that detects intruders (the intrusion process), which reduce the farmer's yield at the end of the planting/harvesting period. The objective of the work is to set out a model for a handheld or portable device for increasing the quality and quantity of arable crops. This process incorporates the use of an infrared motion image sensor with a security alarm system which can send a noise signal to an intruder on the farm. This model of a portable image-sensing device for monitoring or scaring off humans, rodents, birds, and even pests will reduce post-harvest loss, which will increase the yield on the farm. Nano-intelligence technology is proposed to combat and minimize the intrusion process that usually leads to low quality and quantity of produce from the farm. An intranet system will be in place with wireless radio (WLAN), a router, a server, and client computer systems or handheld devices, e.g., PDAs or mobile phones. This approach enables the development of hybrid systems which will be effective as a security measure on the farm, since precision agriculture has developed with the computerization of agricultural production systems and the networking of computerized control systems. In the intelligent plant production systems of controlled greenhouses, information on plant responses, measured by sensors, is used to optimize the system. Further work must be carried out on modeling using a pervasive computing environment to solve problems of agriculture, as the use of electronics in agriculture will attract more youth involvement in the industry.
Keywords: pervasive computing, intrusion detection, precision agriculture, security, arable crop
Procedia PDF Downloads 403
1328 Review of Microstructure, Mechanical and Corrosion Behavior of Aluminum Matrix Composite Reinforced with Agro/Industrial Waste Fabricated by Stir Casting Process
Authors: Mehari Kahsay, Krishna Murthy Kyathegowda, Temesgen Berhanu
Abstract:
Aluminum matrix composites have been a focus of research and industrial use for the last few decades, especially in applications not requiring extreme loading or thermal conditions. Their relatively low cost, simple processing, and attractive properties are the reasons for the widespread use of aluminum matrix composites in the manufacturing of automobiles, aircraft, military equipment, and sports goods. In this article, the microstructure, mechanical, and corrosion behaviors of the aluminum matrix are reviewed, focusing on the stir casting fabrication process and the use of agro/industrial waste reinforcement particles. The results portrayed that mechanical properties like tensile strength, ultimate tensile strength, hardness, percentage elongation, impact, and fracture toughness are highly dependent on the amount, kind, and size of the reinforcing particles. Additionally, uniform distribution, the wettability of the reinforcement particles, and the porosity level of the resulting composite also affect the mechanical and corrosion behaviors of aluminum matrix composites. The two-step stir-casting process resulted in better wetting characteristics, a lower porosity level, and a uniform distribution of particles with proper handling of process parameters. On the other hand, the inconsistent and contradictory results on the corrosion behavior of monolithic and hybrid aluminum matrix composites need further study.
Keywords: microstructure, mechanical behavior, corrosion, aluminum matrix composite
Procedia PDF Downloads 73
1327 An Automated Approach to the Nozzle Configuration of Polycrystalline Diamond Compact Drill Bits for Effective Cuttings Removal
Authors: R. Suresh, Pavan Kumar Nimmagadda, Ming Zo Tan, Shane Hart, Sharp Ugwuocha
Abstract:
Polycrystalline diamond compact (PDC) drill bits are extensively used in the oil and gas industry as well as the mining industry. Industry engineers continually improve upon PDC drill bit designs and hydraulic conditions. Optimized injection nozzles play a key role in improving the drilling performance and efficiency of these ever-changing PDC drill bits. In the first part of this study, computational fluid dynamics (CFD) modelling is performed to investigate the hydrodynamic characteristics of drilling fluid flow around the PDC drill bit. The open-source CFD software OpenFOAM simulates the flow around the drill bit, based on field input data. A specifically developed console application integrates the entire CFD process, including domain extraction, meshing, solving the governing equations, and post-processing. The results from the OpenFOAM solver are then compared with those of the ANSYS Fluent software, and the data from both software programs agree. The second part of the paper describes a parametric study of the PDC drill bit nozzle to determine the effect of parameters such as the number of nozzles, nozzle velocity, and nozzle radial position and orientation on the flow field characteristics and bit washing patterns. After analyzing a series of nozzle configurations, the best configuration is identified and recommendations are made for modifying the PDC bit design.
Keywords: ANSYS Fluent, computational fluid dynamics, nozzle configuration, OpenFOAM, PDC drill bit
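Such a console application typically chains the standard OpenFOAM command-line utilities over a prepared case directory. The sketch below is a minimal Python orchestration under that assumption, using the stock blockMesh/snappyHexMesh/simpleFoam/postProcess utilities on a hypothetical case path; it is not the authors' application, whose actual steps and solver choice are not given in the abstract.

```python
import subprocess
from pathlib import Path

def run_stage(case: Path, cmd: list[str]) -> None:
    """Run one OpenFOAM utility in the case directory, logging its output."""
    log = case / f"log.{cmd[0]}"
    with log.open("w") as fh:
        subprocess.run(cmd, cwd=case, stdout=fh, stderr=subprocess.STDOUT,
                       check=True)

def run_case(case: Path) -> None:
    # Assumed pipeline: background mesh, bit-geometry mesh, steady solve, post.
    for cmd in (["blockMesh"],
                ["snappyHexMesh", "-overwrite"],
                ["simpleFoam"],
                ["postProcess", "-func", "writeCellCentres"]):
        run_stage(case, cmd)

run_case(Path("cases/pdc_bit_nozzle_3x"))   # hypothetical case directory
```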
Procedia PDF Downloads 420
1326 A Recommender System for Job Seekers to Show up Companies Based on Their Psychometric Preferences and Company Sentiment Scores
Authors: A. Ashraff
Abstract:
The increasing importance of the web as a medium for electronic and business transactions has served as a catalyst, or rather a driving force, for the introduction and implementation of recommender systems. Recommender systems play a major role in processing and analyzing thousands of data rows or reviews and help humans make purchase decisions about products or services. They also have the ability to predict whether a particular user would rate a product or service, based on the user's behavioral profile. At present, recommender systems are being used extensively in every domain known to us; they are said to be ubiquitous. However, in the field of recruitment, they are not yet utilized extensively. Recent statistics show an increase in staff turnover, which has negatively impacted organizations as well as employees; the reasons include company culture, working flexibility (work-from-home opportunities), lack of learning advancement, and pay scale. Further investigation revealed that there is a lack of guidance or support to help a job seeker find the company that will suit him best, and though information about companies is available, job seekers cannot read all the reviews by themselves and reach an analytical decision. In this paper, we propose an approach to study the available review data on IT companies (scoring their reviews based on user review sentiments) and to gather information on job seekers, including their psychometric evaluations, and then present the job seeker with useful information, or rather outputs, on which company is most suitable for them. The theoretical approach, the algorithmic approach, and the importance of such a system are discussed in this paper.
Keywords: psychometric tests, recommender systems, sentiment analysis, hybrid recommender systems
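One minimal way to combine the two signals is to score each company by mean review sentiment, score the fit between a job seeker's psychometric profile and a company's trait profile (e.g., cosine similarity), and rank by a weighted blend. The sketch below illustrates that idea only; the trait vectors, weights, and scoring scheme are assumptions, not the paper's algorithm.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def rank_companies(seeker_profile, companies, alpha=0.5):
    """Blend psychometric fit with mean review sentiment (both in [0, 1]).
    companies: {name: (trait_vector, [sentiment scores of its reviews])}."""
    scored = []
    for name, (traits, sentiments) in companies.items():
        fit = cosine(seeker_profile, traits)
        sentiment = sum(sentiments) / len(sentiments)
        scored.append((alpha * fit + (1 - alpha) * sentiment, name))
    return sorted(scored, reverse=True)

# Hypothetical traits: [teamwork, autonomy, structure]; sentiments in [0, 1].
companies = {
    "Acme IT":  ([0.9, 0.3, 0.7], [0.8, 0.7, 0.9]),
    "ByteCorp": ([0.2, 0.9, 0.4], [0.6, 0.5, 0.7]),
}
print(rank_companies([0.8, 0.4, 0.6], companies))
```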
Procedia PDF Downloads 107
1325 Iterative Segmentation and Application of Hausdorff Dilation Distance in Defect Detection
Authors: S. Shankar Bharathi
Abstract:
Inspection of surface defects on metallic components has always been challenging due to their specular nature. Defects such as scratches, rust, and pitting occur very commonly on metallic surfaces during the manufacturing process. These defects, if unchecked, can hamper the performance and reduce the lifetime of such components. Many of the conventional image processing algorithms for detecting surface defects involve segmentation techniques based on thresholding, edge detection, watershed segmentation, and textural segmentation. They later employ other suitable algorithms based on morphology, region growing, shape analysis, or neural networks for classification purposes. In this paper, the work is focused only on detecting scratches. Global and other thresholding techniques were used to extract the defects, but they proved inaccurate in extracting the defects alone. However, this paper does not focus on comparing different segmentation techniques; rather, it describes a novel approach to segmentation combined with the Hausdorff dilation distance. The proposed algorithm is based on the distribution of the intensity levels, that is, whether a certain gray level is concentrated or evenly distributed. The algorithm is based on the extraction of such concentrated pixels. Defective images showed a high concentration of some gray levels, whereas in non-defective images the gray levels seemed to show no concentration but were evenly distributed. This formed the basis for detecting the defects in the proposed algorithm. The Hausdorff dilation distance, based on mathematical morphology, was used to strengthen the segmentation of the defects.
Keywords: metallic surface, scratches, segmentation, hausdorff dilation distance, machine vision
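The Hausdorff dilation distance between two binary sets can be computed morphologically: it is the smallest number of unit dilations of one set needed to cover the other, taken symmetrically. The sketch below is a generic SciPy implementation of that definition on binary masks; it illustrates the concept rather than the paper's exact formulation.

```python
import numpy as np
from scipy.ndimage import binary_dilation

SE = np.ones((3, 3), bool)  # 8-connected (3x3) structuring element

def directed_dilation_distance(a, b, max_iter=500):
    """Smallest n such that dilating `a` n times with SE covers every pixel of `b`."""
    covered = a.copy()
    for n in range(max_iter + 1):
        if (b & ~covered).sum() == 0:
            return n
        covered = binary_dilation(covered, structure=SE)
    raise RuntimeError("sets too far apart")

def hausdorff_dilation_distance(a, b):
    # Symmetric version: max of the two directed distances.
    return max(directed_dilation_distance(a, b),
               directed_dilation_distance(b, a))

a = np.zeros((32, 32), bool); a[10, 10] = True
b = np.zeros((32, 32), bool); b[10, 14] = True
print(hausdorff_dilation_distance(a, b))  # 4 (Chebyshev distance via 3x3 SE)
```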
Procedia PDF Downloads 428
1324 Recycled Use of Solid Wastes in Building Material: A Review
Authors: Oriyomi M. Okeyinka, David A. Oloke, Jamal M. Khatib
Abstract:
The large quantities of solid waste generated worldwide from sources such as household, domestic, industrial, commercial, and construction demolition activities lead to environmental concerns. Utilization of these wastes in making building construction materials can reduce the magnitude of the associated problems. When these waste products are used in place of other conventional materials, natural resources and energy are preserved, and expensive and/or potentially harmful waste disposal is avoided. Recycling, which is regarded as the third most preferred waste disposal option, with its numerous environmental benefits, stands as a viable option to offset the environmental impact associated with the construction industry. This paper reviews the results of laboratory tests and important research findings, and the potential of using these wastes in building construction materials, with a focus on sustainable development. Research gaps have also been identified, including: the need to develop a standard mix design for solid-waste-based building materials; the need to develop an energy-efficient method of processing solid waste for use in concrete; the need to study the actual behavior or performance of such building materials in practical applications; and the limited real-life application of such building materials. Therefore, a research project is proposed to develop an environmentally friendly, lightweight building block from recycled waste paper, without the use of cement, and with properties suitable for use as a walling unit. This proposed research intends to incorporate laboratory experimentation and modeling to address the identified research gaps.
Keywords: recycling, solid wastes, construction, building materials
Procedia PDF Downloads 385
1323 Engineering Strategies Towards Improvement in Energy Storage Performance of Ceramic Capacitors for Pulsed Power Applications
Authors: Abdul Manan
Abstract:
The need for efficient and cost-effective energy storage devices to intelligently store the inconsistent energy output from modern renewable energy sources has peaked today. The scientific community is struggling to identify the appropriate material system for energy storage applications. Countless contributions by researchers worldwide have now helped us identify the possible snags and limitations associated with each material/method. Energy storage has attracted great attention for its uses in portable electronic devices and in the military field. Different devices, such as dielectric capacitors, supercapacitors, and batteries, are used for energy storage. Of these, dielectric capacitors have a high energy output, a long life cycle, fast charging and discharging capabilities, operation at high temperatures, and excellent fatigue resistance. The energy storage characteristics have been shown to be highly affected by various factors, such as grain size, optimized compositions, grain orientation, energy band gap, processing techniques, defect engineering, core-shell formation, interface engineering, electronegativity difference, the addition of additives, density, secondary phases, the difference Pmax − Pr, sample thickness, electrode area, testing frequency, and AC/DC conditions. The data regarding these parameters/factors are scattered in the literature, and the aim of this study is to gather the data into a single paper that will be beneficial for new researchers in the field of interest. Furthermore, control over and optimization of these parameters will lead to enhanced energy storage properties.
Keywords: strategies, ceramics, energy storage, capacitors
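The role of the Pmax − Pr difference follows from the standard definitions of recoverable energy density and storage efficiency for a dielectric capacitor, computed from the P-E hysteresis loop. These are the textbook relations, reproduced here for reference; a larger Pmax − Pr window sustained up to a high breakdown field directly raises the recoverable energy density.

```latex
% Standard P-E loop energy-storage relations for a dielectric capacitor
W_{\mathrm{rec}} = \int_{P_r}^{P_{\max}} E \, \mathrm{d}P,
\qquad
\eta = \frac{W_{\mathrm{rec}}}{W_{\mathrm{rec}} + W_{\mathrm{loss}}},
% where W_loss is the area enclosed by the charge-discharge hysteresis loop
```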
Procedia PDF Downloads 78
1322 Effect of Manganese Doping on Ferroelectric Properties of (K0.485Na0.5Li0.015)(Nb0.98V0.02)O3 Lead-Free Piezoceramic
Authors: Chongtham Jiten, Radhapiyari Laishram, K. Chandramani Singh
Abstract:
The alkaline niobate (Na0.5K0.5)NbO3 ceramic system has attracted major attention in view of its potential for replacing the highly toxic but superior lead zirconate titanate (PZT) system for piezoelectric applications. Recently, a more detailed study of this system revealed that the ferroelectric and piezoelectric properties are optimized in the Li- and V-modified system with the composition (K0.485Na0.5Li0.015)(Nb0.98V0.02)O3. In the present work, we further study the pyroelectric behaviour of this composition along with another doped with Mn4+. Thus, (K0.485Na0.5Li0.015)(Nb0.98V0.02)O3 + x MnO2 (x = 0 and 0.01 wt. %) ceramic compositions were synthesized by the conventional ceramic processing route. X-ray diffraction study reveals that both the undoped and Mn4+-doped ceramic samples crystallize into a perovskite structure with orthorhombic symmetry. Dielectric study indicates that Mn4+ doping has little effect on both the Curie temperature (Tc) and the tetragonal-orthorhombic phase transition temperature (Tot). The bulk density and the room-temperature dielectric constant (εRT) were also characterized. The room-temperature coercive field (Ec) is observed to be lower in the Mn4+-doped sample. Detailed analysis of the P-E hysteresis loops over the temperature range from about room temperature to Tot points out that enhanced ferroelectric properties exist in this temperature range, with better thermal stability for the Mn4+-doped ceramic. The study reveals that small traces of Mn4+ can modify the (K0.485Na0.5Li0.015)(Nb0.98V0.02)O3 system so as to improve its ferroelectric properties, with good thermal stability over a wide range of temperature.
Keywords: ceramics, dielectric properties, ferroelectric properties, lead-free, sintering, thermal stability
Procedia PDF Downloads 238
1321 Encapsulation and Protection of Bioactive Nutrients Based on Ligand-Binding Property of Milk Proteins
Authors: Hao Cheng, Yingzhou Ni, Amr M. Bakry, Li Liang
Abstract:
Functional foods containing bioactive nutrients offer benefits beyond basic nutrition and hence the possibility of delaying and preventing chronic diseases. However, many bioactive nutrients degrade rapidly under food processing and storage conditions. Encapsulation can be used to overcome these limitations. Food proteins have been widely used as carrier materials for the preparation of nano/micro-particles because of their ability to form gels and emulsions and to interact with polysaccharides. The mechanisms of interaction between bioactive nutrients and proteins must be understood in order to develop protein-based, lipid-free delivery systems. Beta-lactoglobulin, a small globular protein in milk whey, exhibits an affinity for a wide range of compounds. Alpha-tocopherol, resveratrol, and folic acid were bound, respectively, to the central cavity, the outer surface near Trp19-Arg124, and the hydrophobic pocket in the groove between the alpha-helix and the beta-barrel of the protein. Beta-lactoglobulin could thus bind the three bioactive nutrients simultaneously to form protein-multi-ligand complexes. Beta-casein, an intrinsically unstructured but major milk protein, could also interact with resveratrol and folic acid to form complexes. These results suggest the potential to develop milk-protein-based complex carrier systems for the encapsulation of multiple bioactive nutrients for functional food applications, as well as pharmaceutical and medical uses.
Keywords: milk protein, bioactive nutrient, interaction, protection
Procedia PDF Downloads 412
1320 Rational Memory Therapy: The Counselling Technique to Control Psychological and Psychosomatic Illnesses
Authors: Sachin Deshmukh
Abstract:
Mind and body synchronization occurs through memory and sensation production. Sensations are the guiding language of the subconscious mind, prompting the conscious mind to take proper action. The mind's mechanism is based upon memories collected since intrauterine life. There are three universal triggers for memory creation: persons, situations, and objects. A memory is created from sensations experienced by the special senses. Based upon the experience of comfort or discomfort, triggers are categorized as safe or unsafe. A memory comprises a 'safe or unsafe feeling for triggers, and the actions taken for that feeling'. Memories for triggers are created slowly, thoughtfully, and consciously by the conscious mind and archived in the subconscious mind for future reference. Later on, similar triggers can come in contact with the individual. The subconscious mind uses these stored feelings to decide whether these triggers are safe or unsafe; it produces comfort or discomfort sensations as emotions accordingly and reacts in the same way as has been recorded in memory. The speed of sensing and processing triggers, and of reacting by the subconscious mind, is the speed of bioelectricity. Hence, a formula for human emotions has been designed in this paper as follows: Emotion (Stress or Peace) = Trigger (Person, Situation, or Object) × Mass of feelings (stressful or peaceful) associated with the Trigger × Speed of Light². We also establish modern medical scientific facts about the relationship between reflex activity and memory. This research further develops Rational Memory Therapy, focusing on therapeutic feelings-conversion techniques for stress prevention and management.
Keywords: memory, sensations, feelings, emotions, rational memory therapy
Procedia PDF Downloads 255
1319 Multimodal Optimization of Density-Based Clustering Using Collective Animal Behavior Algorithm
Authors: Kristian Bautista, Ruben A. Idoy
Abstract:
A bio-inspired metaheuristic algorithm based on the theory of collective animal behavior (CAB) was integrated into density-based clustering, modeled as a multimodal optimization problem. The algorithm was tested on synthetic, Iris, Glass, Pima, and Thyroid data sets in order to measure its effectiveness relative to the CDE-based clustering algorithm. Upon preliminary testing, it was found that one of the parameter settings used was ineffective in performing clustering when applied to the algorithm, prompting the researcher to investigate. It was revealed that fine-tuning the distance δ3, which determines the extent to which a given data point will be clustered, helped improve the quality of the cluster output. Even though the modification of distance δ3 significantly improved the solution quality and cluster output of the algorithm, results suggest that there is no difference between the population means of the solutions obtained using the original and modified parameter settings for all data sets. This implies that using either the original or the modified parameter setting will not have any effect on obtaining the best global and local animal positions. Results also suggest that the CDE-based clustering algorithm is better than the CAB-density clustering algorithm for all data sets. Nevertheless, the CAB-density clustering algorithm is still a good clustering algorithm because it correctly identified the number of classes of some data sets more frequently over thirty trial runs, with a much smaller standard deviation; this indicates potential for clustering high-dimensional data sets. Thus, the researcher recommends further investigation of the post-processing stage of the algorithm.
Keywords: clustering, metaheuristics, collective animal behavior algorithm, density-based clustering, multimodal optimization
Procedia PDF Downloads 231
1318 Enhancer: An Effective Transformer Architecture for Single Image Super Resolution
Authors: Pitigalage Chamath Chandira Peiris
Abstract:
A widely researched domain in the field of image processing in recent times has been single image super-resolution, which tries to restore a high-resolution image from a single low-resolution image. Many single image super-resolution efforts have been completed utilizing both traditional and deep learning methodologies, as well as a variety of other approaches. Deep learning-based super-resolution methods, in particular, have received significant interest. As of now, the most advanced image restoration approaches are based on convolutional neural networks; nevertheless, only a few efforts have been made using Transformers, which have demonstrated excellent performance on high-level vision tasks. The effectiveness of CNN-based algorithms in image super-resolution has been impressive. However, these methods cannot completely capture the non-local features of the data. Enhancer is a simple yet powerful Transformer-based approach for enhancing the resolution of images. In this study, a method for single image super-resolution was developed that utilizes an efficient and effective transformer design. The proposed architecture makes use of a locally enhanced window transformer block to alleviate the enormous computational load associated with non-overlapping window-based self-attention. Additionally, it incorporates depth-wise convolution in the feed-forward network to enhance its ability to capture local context. The study is assessed by comparing the results obtained on popular datasets with those obtained by other techniques in the domain.
Keywords: single image super resolution, computer vision, vision transformers, image restoration
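Non-overlapping window self-attention rests on a simple tensor reshaping: the feature map is partitioned into ws×ws windows, attention is computed within each window, and the windows are merged back. The sketch below shows the standard partition/merge pair (as popularized by Swin-style models) in PyTorch; it is an illustration of the mechanism, not Enhancer's actual block.

```python
import torch

def window_partition(x, ws):
    """(B, H, W, C) -> (B * H//ws * W//ws, ws*ws, C): one row per window."""
    b, h, w, c = x.shape
    x = x.view(b, h // ws, ws, w // ws, ws, c)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, c)

def window_merge(windows, ws, h, w):
    """Inverse of window_partition: (num_windows*B, ws*ws, C) -> (B, H, W, C)."""
    b = windows.shape[0] // ((h // ws) * (w // ws))
    x = windows.view(b, h // ws, w // ws, ws, ws, -1)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(b, h, w, -1)

x = torch.randn(2, 8, 8, 32)                  # (B, H, W, C)
wins = window_partition(x, ws=4)              # attention would run per window here
assert torch.equal(window_merge(wins, 4, 8, 8), x)
```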
Procedia PDF Downloads 105