Search results for: the important architectural complex of Wang-An Hua-Zhai settlement
15748 New Challenge: Reduction of Aflatoxin M1 Residues in Cow’s Milk by MilBond Dietary Hydrated Sodium Calcium Aluminosilicate (HSCAS) and Its Effect on Milk Composition
Authors: A. Aly Salwa, H. Diekmann, S. Hafiz Ragaa, DG Abo Elhassan
Abstract:
This study aimed to evaluate the effect of MilBond (HSCAS) on aflatoxin M1 in artificially contaminated cow's milk. The chemisorption compound used in this experiment was MilBond, a hydrated sodium calcium aluminosilicate (HSCAS). Raw cow milk was artificially contaminated with aflatoxin M1 at a concentration of 100 ppb, with MilBond added at 0.5, 1, 2 and 3% at room temperature for 30 minutes. Aflatoxin M1 was decreased by more than 95% by HSCAS at 2%. Milk composition (protein, fat, lactose, solids-not-fat and total solids) was not significantly affected by the addition of the adsorbent (p > 0.05). Since this method does not involve degrading the toxin, the milk may be free from toxin degradation products and safe for consumption. In addition, the added material may be easily separated from the milk after the substance adsorbs the toxin. Thus, this method should be developed through further research to determine the effects of these compounds on the functional properties of milk. The ability of hydrated sodium calcium aluminosilicate to prevent or reduce the level of aflatoxin M1 residues in milk is critically needed. This finding has important implications, because milk is ultimately consumed by humans and animals, and the reduction of aflatoxin contamination in milk could have an important impact on their health.
Keywords: aflatoxin M1, hydrated sodium calcium aluminosilicate, detoxification, raw cow milk
Procedia PDF Downloads 437
15747 Integration of Load Introduction Elements into Fabrics
Authors: Jan Schwennen, Harald Schmid, Juergen Fleischer
Abstract:
Lightweight design plays an important role in the automotive industry. Especially the combination of metal and CFRP shows great potential for future vehicle concepts. This requires joining technologies that are cost-efficient and appropriate for the materials involved. Previous investigations show that integrating load introduction elements during CFRP part manufacturing offers great advantages in mechanical performance. However, it is not yet clear how to integrate the elements in an automated process without harming the fiber structure. In this paper, a test rig is built up to investigate experimentally the effect of different parameters during insert integration. After a short description of the experimental equipment, preliminary tests are performed to determine a set of important process parameters. Based on that, the design of experiments is planned. The interpretation and evaluation of the test results show that minimizing the insert diameter and the peak angle causes less harm to the fiber structure. Furthermore, maximizing the die diameter above the insert has a positive effect on the fiber structure. At the end of this paper, a theoretical description of alternative peak shaping is given, and the results are then validated on the basis of an industrial reference part.
Keywords: CFRP, fabrics, insert, load introduction element, integration
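The design-of-experiments planning described above can be illustrated with a minimal full-factorial plan. This is a sketch only: the factor names follow the abstract, while the levels and units are assumptions for illustration.

```python
# Hypothetical full-factorial plan for the insert-integration tests; the
# factors follow the abstract, the levels are illustrative assumptions.
from itertools import product

factors = {
    "insert_diameter_mm": [8, 12, 16],
    "peak_angle_deg": [30, 60, 90],
    "die_diameter_mm": [20, 30, 40],
}

runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
for i, run in enumerate(runs, start=1):
    print(f"run {i:2d}: {run}")
print(f"total runs: {len(runs)}")  # 3^3 = 27 combinations
```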
Procedia PDF Downloads 244
15746 Improving Cleanability by Changing Fish Processing Equipment Design
Authors: Lars A. L. Giske, Ola J. Mork, Emil Bjoerlykhaug
Abstract:
The design of fish processing equipment greatly impacts how easy the equipment is to clean. This is a critical issue in fish processing, as cleaning of fish processing equipment is a task that is both costly and time-consuming, in addition to being very important with regard to product quality. Even more, poorly cleaned equipment could in the worst case lead to a contaminated product from which consumers could get ill. This paper elucidates how equipment design changes could improve the work of the cleaners and save money for fish processing facilities, by looking at a case of product design improvements. The design of fish processing equipment largely determines how easy it is to clean. “Design for cleaning” is the new hype in the industry, and equipment in which ease of cleaning is prioritized gains a competitive advantage over equipment in which design for cleaning has not been prioritized. Design for cleaning is an important research area for equipment manufacturers. SeaSide AS is continuously improving the design of its products in order to gain a competitive advantage. The focus in this paper is conveyors for internal logistics, and a product called the “electro stunner” is studied with regard to design for cleaning. Often together with SeaSide’s customers, ideas for new products or product improvements are sketched out, 3D-modelled, discussed, revised, built and delivered. Feedback from the customers is taken into consideration, and the product design is revised once again. This loop was repeated multiple times and led to new product designs. The new designs sometimes also caused the manufacturing processes to change (as in going from bolted to welded connections). Customers report back that the concrete changes applied to products by SeaSide have resulted in overall more easily cleaned equipment. These changes include, but are not limited to: welded connections (as opposed to bolted connections), gaps between contact faces, opening up structures to allow cleaning “inside” equipment, and generally avoiding areas in which humidity and water may gather and build up. This is important, as there will always be bacteria in the water, which will grow if the area never dries up. The work of creating more cleanable designs is still ongoing and will “never” be finished, as new designs and new equipment will have their own challenges.
Keywords: cleaning, design, equipment, fish processing, innovation
Procedia PDF Downloads 239
15745 The Role of Social Media on Political Behaviour in Malaysia
Authors: Ismail Sualman, Mohd Khairuddin Othman
Abstract:
General elections have been the backbone of democracy, permitting people to choose their representatives as they deem fit. Voters' support preferences differ from one to another, particularly in a plural society like Malaysia. The high turnout of young voters in Malaysia's 14th General Election has been attributed to social media, including Facebook, Twitter, WhatsApp, Instagram, YouTube, Telegram, WeChat and SMS/MMS. It has been observed that, besides being an interaction tool among friends, social media is also an important source of information about issues, politics and politicians. This paper exhibits the role of social media in providing political information to young voters before an election and during the election campaign. The study examines how this information is translated into election support. A total of 799 young Malay respondents in Selangor were surveyed and interviewed. The study revealed that social media has become the source of political information among young Malay voters. This research suggests that social media had a significant effect on support during the election. Social media plays an important role in carrying information such as current issues, voting trends, candidate imagery and matters that may influence the views of young voters. The information obtained from social media has been translated into voting decisions.
Keywords: social media, political behaviour, voters’ choice, election
Procedia PDF Downloads 149
15744 Correlation of Material Mechanical Characteristics Obtained by Means of Standardized and Miniature Test Specimens
Authors: Vaclav Mentl, P. Zlabek, J. Volak
Abstract:
New methods of mechanical testing based on miniature test specimens (e.g., the Small Punch Test) have been developed recently. The most important advantage of these methods is the nearly non-destructive withdrawal of test material and the small size of the test specimens, which is valuable in remaining-lifetime assessment, when a sufficient volume of representative material cannot be withdrawn from the component in question. Conversely, the most important disadvantage of such methods stems from the necessity to correlate their results with the results of standardised test procedures and to build up a database of in-service material data. Correlations between the miniature test specimen data and the results of standardised tests are therefore necessary. The paper describes the results of fatigue tests performed on miniature test specimens in comparison with traditional fatigue tests for several steels applied in the power-producing industry. Special miniature test specimen fixtures were designed and manufactured for the purposes of fatigue testing on a Zwick/Roell 10HPF5100 testing machine. The miniature test specimens were produced from the traditional test specimens. Seven different steels were fatigue loaded (R = 0.1) at room temperature.
Keywords: mechanical properties, miniature test specimens, correlations, small punch test, micro-tensile test, mini-Charpy impact test
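The correlation step the abstract calls for can be sketched as a simple linear regression between results from the two specimen types. The numbers below are made up for illustration; they are not data from the paper.

```python
# Hypothetical correlation of fatigue strengths measured on miniature vs.
# standardized specimens for several steels; all values are illustrative.
from scipy import stats

standard_mpa = [310, 285, 260, 340, 295, 270, 325]   # standardized specimens
miniature_mpa = [298, 270, 255, 332, 280, 262, 310]  # miniature specimens

fit = stats.linregress(standard_mpa, miniature_mpa)
print(f"slope = {fit.slope:.3f}, intercept = {fit.intercept:.1f} MPa, "
      f"r = {fit.rvalue:.3f}")
# A correlation like this lets standardized-test values be estimated from
# miniature-specimen results once the database is large enough.
```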
Procedia PDF Downloads 542
15743 Improving the Electrical Conductivity of Epoxy Coating Using Carbon Nanotube by Electrodeposition Method
Authors: Mahla Zabet, Navid Zanganeh, Hafez Balavi, Farbod Sharif
Abstract:
Electrodeposition is a method for applying coatings with uniform thickness on complex objects. In this method, a conductive surface can be produced using an electrical current. Carbon nanotubes are known for their high electrical conductivity and mechanical properties. In this report, NH2-functionalized multiwalled carbon nanotubes (MWCNTs) were used in an epoxy resin at different weight percentages. The weight percentage of MWCNTs incorporated into the matrix was varied in the range of 0.6-3.6 wt% to obtain a series of electrocoatings. The electrocoats were then applied on steel substrates by a cathodic electrodeposition technique. Scanning electron microscopy (SEM) and optical microscopy were used to characterize the electrocoated films. The results illustrated an increase in conductivity with increasing MWCNT load. However, at the percolation threshold, throwing power dropped with an increase in recoating ability.
Keywords: electrodeposition, carbon nanotube, electrical conductivity, throwing power
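The conductivity rise with filler load near a percolation threshold is commonly described by the classical scaling law sigma ~ sigma0 (phi - phi_c)^t. A minimal sketch follows; sigma0, phi_c and t are generic illustrative values, not parameters fitted to this MWCNT/epoxy system.

```python
# Classical percolation scaling of composite conductivity above the
# threshold. All parameter values are illustrative assumptions.
def conductivity(phi, sigma0=1.0, phi_c=0.012, t=2.0):
    """Conductivity (S/m) for filler volume fraction phi."""
    if phi <= phi_c:
        return 0.0  # below the percolation threshold: effectively insulating
    return sigma0 * (phi - phi_c) ** t

for phi in (0.006, 0.012, 0.018, 0.024, 0.036):
    print(f"phi = {phi:.3f} -> sigma = {conductivity(phi):.2e} S/m")
```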
Procedia PDF Downloads 421
15742 Association of Non-Synonymous SNP in DC-SIGN Receptor Gene with Tuberculosis (TB)
Authors: Saima Suleman, Kalsoom Sughra, Naeem Mahmood Ashraf
Abstract:
Mycobacterium tuberculosis causes a communicable chronic illness. This disease is highly focused on by researchers, as it is present in approximately one third of the world's population in either active or latent form. The genetic makeup of a person plays an important part in producing immunity against disease, and one important factor is single nucleotide polymorphism (SNP) of the relevant gene. In this study, we examined the association between single nucleotide polymorphisms of the CD209 gene (which encodes the DC-SIGN receptor) and tuberculosis patients. Dry-lab (in silico) and wet-lab (RFLP) analyses were carried out. The GWAS catalogue and the GEO database were searched for previous association data. No association study related to CD209 nsSNPs was found, but the role of CD209 in pulmonary tuberculosis has been addressed in the GEO database; therefore, CD209 was selected for this study. Databases such as Ensembl and the 1000 Genomes Project were used to retrieve SNP data in the form of a VCF file, which was then submitted to different software tools to sort SNPs into benign and deleterious. Selected SNPs were further annotated by 3D modeling techniques using the I-TASSER online software. Furthermore, selected nsSNPs were checked in the Gujrat and Faisalabad populations through RFLP analysis. In this study population, two nsSNPs were found to be associated with tuberculosis, while one nsSNP was not found to be associated with the disease.
Keywords: association, CD209, DC-SIGN, tuberculosis
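A case-control association test of the kind underlying such a study can be sketched as a chi-square test on a 2x2 carrier-status by disease-status table. The counts below are made up, not data from the Gujrat/Faisalabad cohort.

```python
# Hypothetical case-control association test for one nsSNP: chi-square
# test on a 2x2 contingency table. Counts are illustrative only.
from scipy.stats import chi2_contingency

#                 TB cases   controls
table = [[34, 16],   # risk-allele carriers
         [21, 29]]   # non-carriers

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
if p < 0.05:
    print("nsSNP associated with tuberculosis in this sample")
```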
Procedia PDF Downloads 309
15741 Deficient Multisensory Integration with Concomitant Resting-State Connectivity in Adult Attention Deficit/Hyperactivity Disorder (ADHD)
Authors: Marcel Schulze, Behrem Aslan, Silke Lux, Alexandra Philipsen
Abstract:
Objective: Patients with Attention Deficit/Hyperactivity Disorder (ADHD) often report being flooded by sensory impressions. Studies investigating sensory processing show hypersensitivity to sensory inputs across the senses in children and adults with ADHD. The auditory modality in particular is affected by deficient acoustic inhibition and modulation of signals. While studying unimodal signal processing is relevant and well suited to a controlled laboratory environment, everyday life situations are multimodal. A complex interplay of the senses is necessary to form a unified percept. In order to achieve this, the unimodal sensory modalities are bound together in a process called multisensory integration (MI). In the current study, we investigate MI in an adult ADHD sample using the McGurk effect, a well-known illusion in which incongruent speech-like phonemes lead, in the case of successful integration, to a new perceived phoneme via late top-down attentional allocation. In ADHD, neuronal dysregulation at rest, e.g., aberrant within- or between-network functional connectivity, may also account for difficulties in integrating across the senses. Therefore, the current study includes resting-state functional connectivity to investigate a possible relation between deficient network connectivity and the ability to integrate stimuli. Method: Twenty-five ADHD patients (6 females, age: 30.08 (SD: 9.3) years) and twenty-four healthy controls (9 females; age: 26.88 (SD: 6.3) years) were recruited. MI was examined using the McGurk effect, where, in case of successful MI, incongruent speech-like phonemes between the visual and auditory modalities lead to the perception of a new phoneme. The Mann-Whitney U test was applied to assess statistical differences between groups. Echo-planar resting-state functional MRI was acquired on a 3.0 Tesla Siemens Magnetom MR scanner. A seed-to-voxel analysis was realized using the CONN toolbox. Results: Susceptibility to the McGurk effect was significantly lower for ADHD patients (ADHD Mdn: 5.83%, controls Mdn: 44.2%, U = 160.5, p = 0.022, r = -0.34). When ADHD patients integrated phonemes, reaction times were significantly longer (ADHD Mdn: 1260 ms, controls Mdn: 582 ms, U = 41.0, p < .001, r = -0.56). In functional connectivity, the medio-temporal gyrus (seed) was negatively associated with the primary auditory cortex, inferior frontal gyrus, precentral gyrus, and fusiform gyrus. Conclusion: MI seems to be deficient in ADHD patients for stimuli that need top-down attentional allocation. This finding is supported by stronger functional connectivity from unimodal sensory areas to polymodal MI convergence zones for complex stimuli in ADHD patients.
Keywords: attention-deficit hyperactivity disorder, audiovisual integration, McGurk effect, resting-state functional connectivity
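The group comparison reported here can be reproduced in form with a Mann-Whitney U test. The per-subject susceptibility values below are invented stand-ins, not the study's data.

```python
# Hypothetical recomputation of the kind of comparison reported: McGurk
# susceptibility (% illusion trials) compared between groups with a
# Mann-Whitney U test. The per-subject values are made up.
from scipy.stats import mannwhitneyu

adhd = [0, 3, 5, 6, 8, 10, 12, 15, 20, 25]        # susceptibility, %
controls = [20, 30, 35, 40, 44, 48, 55, 60, 70]   # susceptibility, %

u, p = mannwhitneyu(adhd, controls, alternative="two-sided")
print(f"U = {u:.1f}, p = {p:.4f}")
```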
Procedia PDF Downloads 127
15740 Radar Signal Detection Using Neural Networks in Log-Normal Clutter for Multiple Targets Situations
Authors: Boudemagh Naime
Abstract:
Automatic radar detection requires methods of adapting to variations in the background clutter in order to control the false alarm rate. The problem becomes more complicated in a non-Gaussian environment. In fact, the conventional approach in real-time applications requires complex statistical modeling and many computational operations. To overcome these constraints, we propose another approach based on an artificial neural network (ANN-CMLD-CFAR) using a back-propagation (BP) training algorithm. The considered environment follows a log-normal distribution in the presence of multiple Rayleigh targets. To evaluate the performance of the considered detector, several situations, such as the scale parameter and the number of interfering targets, have been investigated. The simulation results show that the ANN-CMLD-CFAR processor outperforms the conventional statistical one.
Keywords: radar detection, ANN-CMLD-CFAR, log-normal clutter, statistical modelling
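For orientation, a minimal sketch of the conventional censored mean-level detector (CMLD) CFAR baseline follows: sort the reference cells, censor the largest ones (which may contain interfering targets), and threshold the cell under test. Window sizes, censoring depth and threshold factor are illustrative assumptions, not the paper's settings.

```python
# Minimal CMLD-CFAR sketch, the statistical baseline the ANN detector is
# compared against. All parameter values are illustrative assumptions.
import numpy as np

def cmld_cfar(x, n_ref=16, n_guard=2, n_censor=4, alpha=5.0):
    """Return a boolean detection mask for the 1-D power samples x."""
    half = n_ref // 2
    detections = np.zeros_like(x, dtype=bool)
    for i in range(half + n_guard, len(x) - half - n_guard):
        lead = x[i - half - n_guard : i - n_guard]
        lag = x[i + n_guard + 1 : i + n_guard + 1 + half]
        ref = np.sort(np.concatenate([lead, lag]))
        ref = ref[:-n_censor]            # censor the largest cells
        detections[i] = x[i] > alpha * ref.mean()
    return detections

rng = np.random.default_rng(0)
clutter = rng.lognormal(mean=0.0, sigma=0.5, size=200)  # log-normal clutter
clutter[[60, 63, 120]] += 30.0                          # three targets
print(np.flatnonzero(cmld_cfar(clutter)))
```

Censoring is what keeps the two closely spaced targets at cells 60 and 63 from masking each other, the multiple-target situation the abstract studies.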
Procedia PDF Downloads 367
15739 Regularizing Software for Aerosol Particles
Authors: Christine Böckmann, Julia Rosemann
Abstract:
We present an inversion algorithm that is used in the European Aerosol Lidar Network for the inversion of data collected with multi-wavelength Raman lidar. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. The algorithm is based on manually controlled inversion of optical data, which allows for detailed sensitivity studies and thus provides us with comparably high quality of the derived data products. The algorithm allows us to derive particle effective radius and volume and surface-area concentration with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index is still a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. Single-scattering albedo (SSA) can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high- and low-absorbing aerosols. From a mathematical point of view, the algorithm is based on the concept of using truncated singular value decomposition as the regularization method. This method was adapted to work for the retrieval of the particle size distribution function (PSD) and is called a hybrid regularization technique, since it uses a triple of regularization parameters. The inversion of an ill-posed problem, such as the retrieval of the PSD, is always a challenging task, because very small measurement errors will most often be amplified hugely during the solution process unless an appropriate regularization method is used. Even using a regularization method is difficult, since appropriate regularization parameters have to be determined. Therefore, in the next stage of our work, we decided to use two regularization techniques in parallel for comparison purposes. The second method is an iterative regularization method based on Padé iteration; here, the number of iteration steps serves as the regularization parameter. We successfully developed semi-automated software for spherical particles which is able to run even on a parallel-processor machine. From a mathematical point of view, it is also very important (as a selection criterion for an appropriate regularization method) to investigate the degree of ill-posedness of the problem, which we found to be moderate. We computed the optical data from mono-modal logarithmic PSDs and investigated particles of spherical shape in our simulations. We considered particle radii as large as 6 µm, which not only covers the size range of particles in the fine-mode fraction of naturally occurring PSDs but also covers a part of the coarse-mode fraction. We considered errors of 15% in the simulation studies. For the SSA, 100% of all cases achieve relative errors below 12%. In more detail, 87% of all cases for 355 nm and 88% of all cases for 532 nm are well below 6%. With respect to the absolute error, for non- and weakly-absorbing particles with real parts 1.5 and 1.6, in all modes the accuracy limit ±0.03 is achieved. In sum, 70% of all cases stay below ±0.03, which is sufficient for climate change studies.
Keywords: aerosol particles, inverse problem, microphysical particle properties, regularization
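The truncated-SVD idea can be demonstrated on a synthetic discrete ill-posed problem A s = b, where s stands in for the particle size distribution and b for the optical data. The smooth kernel, the mono-modal log-normal PSD and the 15% noise level below are synthetic stand-ins, not the lidar forward model itself.

```python
# Minimal truncated-SVD (TSVD) regularization sketch for a discrete
# ill-posed problem. Kernel, PSD and noise are synthetic illustrations.
import numpy as np

rng = np.random.default_rng(1)
n = 60
x = np.linspace(0.05, 3.0, n)                         # size grid (arb. units)
A = np.exp(-np.subtract.outer(x, x) ** 2 / 0.5)       # smooth, ill-posed kernel
s_true = np.exp(-((np.log(x)) ** 2) / 0.18)           # mono-modal log-normal PSD
b = A @ s_true * (1 + 0.15 * rng.standard_normal(n))  # 15% measurement error

U, sigma, Vt = np.linalg.svd(A)

def tsvd_solve(k):
    """Invert keeping only the k largest singular values."""
    coeff = (U.T @ b)[:k] / sigma[:k]
    return Vt[:k].T @ coeff

for k in (3, 8, 20, 60):
    err = np.linalg.norm(tsvd_solve(k) - s_true) / np.linalg.norm(s_true)
    print(f"k = {k:2d}: relative error = {err:.2f}")
# Small k over-smooths, large k amplifies the noise; choosing k is the
# regularization-parameter problem discussed in the abstract.
```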
Procedia PDF Downloads 344
15738 Particle Observation in Secondary School Using a Student-Built Instrument: Design-Based Research on a STEM Sequence about Particle Physics
Authors: J. Pozuelo-Muñoz, E. Cascarosa-Salillas, C. Rodríguez-Casals, A. de Echave, E. Terrado-Sieso
Abstract:
This study focuses on the development, implementation, and evaluation of an instructional sequence aimed at 16-17-year-old students, involving the design and use of a cloud chamber, a device that allows observation of subatomic particles. The research addresses the limited presence of particle physics in Spanish secondary and high school curricula, a gap that restricts students' learning of advanced physics concepts and diminishes engagement with complex scientific topics. The primary goal of this project is to introduce particle physics in the classroom through a practical, interdisciplinary methodology that promotes autonomous learning and critical thinking. The methodology is framed within Design-Based Research (DBR), an approach that enables iterative and pragmatic development of educational resources. The research proceeded in several phases, beginning with the design of an experimental teaching sequence, followed by its implementation in high school classrooms. This sequence was evaluated, redesigned, and reimplemented with the aim of enhancing students' understanding and skills related to designing and using particle detection instruments. The instructional sequence was divided into four stages: introduction to the activity, research on and design of cloud chamber prototypes, observation of particle tracks, and analysis of collected data. In the initial stage, students were introduced to the fundamentals of the activity and provided with bibliographic resources to conduct autonomous research on cloud chamber functioning principles. During the design stage, students sourced materials and constructed their own prototypes, stimulating creativity and understanding of physics concepts like thermodynamics and material properties. The third stage focused on observing subatomic particles, where students recorded and analyzed the tracks generated in their chambers. Finally, critical reflection was encouraged regarding the instrument's operation and the nature of the particles observed. The results show that designing the cloud chamber motivates students and actively engages them in the learning process. Additionally, the use of this device introduces advanced scientific topics beyond particle physics, promoting a broader understanding of science. The study's conclusions emphasize the need to provide students with ample time and space to thoroughly understand the role of materials and physical conditions in the functioning of their prototypes, and to encourage critical analysis of the obtained data. This project not only highlights the importance of interdisciplinarity in science education but also provides a practical framework for teachers to adapt complex concepts to educational contexts where these topics are often absent.
Keywords: cloud chamber, particle physics, secondary education, instructional design, design-based research, STEM
Procedia PDF Downloads 16
15737 Identifying Necessary Words for Understanding Academic Articles in English as a Second or a Foreign Language
Authors: Stephen Wagman
Abstract:
This paper identifies three common structures in English sentences that are important for understanding academic texts, regardless of the characteristics or background of the readers or whether they are reading English as a second or a foreign language. Adapting a model from the humanities, the explication of texts used in literary studies, the paper analyses sample sentences to reveal structures that enable the reader not only to decide which words are necessary for understanding the main ideas but to make that decision without knowing the meaning of the words. By their very syntax, noun structures point to the key word for understanding them. As a rule, the key noun is followed by easily identifiable prepositions, relative pronouns, or verbs and preceded by single adjectives. With few exceptions, the modifiers are unnecessary for understanding the idea of the sentence. In addition, sentences are often structured by lists in which the items frequently consist of parallel groups of words. The principle of a list is that all the items are similar in meaning, and it is not necessary to understand all of the items to understand the point of the list. This principle is especially important when the items are long or there is more than one list in the same sentence. The similarity in meaning of these items enables readers to reduce sentences that are hard to grasp to an understandable core without excessive use of a dictionary. Finally, the idea of subordination and the identification of the subordinate parts of sentences through connecting words make it possible for readers to focus on main ideas without having to sift through the less important and more numerous secondary structures. Sometimes a main idea requires a subordinate one to complete its meaning, but usually subordinate ideas are unnecessary for understanding the main point of the sentence and its part in the development of the argument from sentence to sentence. Moreover, the connecting words themselves indicate the functions of the subordinate structures; these most frequently show similarity and difference or reasons and results. Recognition of all of these structures can not only enable students to read more efficiently but also to focus their attention on the development of the argument, and this, rather than a multitude of unknown vocabulary items, the repetition in lists, or the subordination in sentences, is the one necessary element for comprehension of academic articles.
Keywords: development of the argument, lists, noun structures, subordination
Procedia PDF Downloads 248
15736 An Epistemological Approach of the Social Movements Studies in Cali (Colombia) between 2002 and 2016
Authors: Faride Crespo Razeg, Beatriz Eugenia Rivera Pedroza
Abstract:
As Colombian society has changed, the way Colombia's civil society participates has changed too. Thus, social movements, as a form of participation, should be researched in order to understand both the structure of society and the interactions of groups. In fact, in the last decades, social movements in Colombia have been transformed along three categories: actors, spaces, and demands. For this reason, it is important to know from what perspectives this topic has been researched, allowing recognition of its epistemological and ontological reflections. The goal of this research has been to characterize the social movements of Cali, Colombia between 2002 and 2016. Cali is the largest city of southwestern Colombia; for this reason, it could be considered representative of the social dynamics of the region. Qualitative methods such as documentary analysis have been used in order to learn how research on social movements has been done. Taking into account this methodological technique, we identified the goals present in most of the studies, which represent the main concerns around this topic, as well as the methodologies most used, in order to understand the way the data were collected, with their problems and advantages. Finally, the ontological and epistemological reflections are important for understanding the theoretical and conceptual approaches of the studies and how they have been contextualized to Cali, taking into account its own history.
Keywords: social movements, civil society, forms of participation, collective actions
Procedia PDF Downloads 289
15735 The Trajectory of the Ball in Football Game
Authors: Mahdi Motahari, Mojtaba Farzaneh, Ebrahim Sepidbar
Abstract:
Tracking of moving and flying targets is one of the most important issues in image processing. Estimating the trajectory of a desired object on short-term and long-term scales is even more important than the tracking itself. In this paper, a new way of identifying and estimating the future trajectory of a moving ball on a long-term scale is proposed, combining several techniques: image processing algorithms including noise removal and image segmentation; a Kalman filter algorithm for estimating the trajectory of the ball in a football game on a short-term scale; and an intelligent adaptive neuro-fuzzy algorithm based on the time series of traversed distance. Using these methods on a database of game video, the proposed system attains more than 96% identification accuracy. Although the present method has high precision, it is time-consuming. By comparing this method with other methods, we confirm its accuracy and efficiency.
Keywords: tracking, signal processing, moving and flying targets, artificial intelligence systems, trajectory estimation, Kalman filter
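The short-term stage can be illustrated with a constant-velocity Kalman filter over noisy pixel detections of the ball. The motion model and noise levels below are illustrative assumptions, not the paper's tuned values.

```python
# Minimal constant-velocity Kalman filter for short-term prediction of the
# ball's image position; state = [x, y, vx, vy]. All values illustrative.
import numpy as np

dt = 1.0  # one frame
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # we only observe (x, y)
Q = 0.01 * np.eye(4)                                # process noise
R = 4.0 * np.eye(2)                                 # measurement noise (px^2)

x = np.zeros(4)
P = 100.0 * np.eye(4)
for z in [(10, 5), (12, 7), (14, 9), (16, 11)]:     # noisy detections
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (np.asarray(z, float) - H @ x)
    P = (np.eye(4) - K @ H) @ P

print("next-frame prediction:", (F @ x)[:2])  # short-term trajectory estimate
```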
Procedia PDF Downloads 462
15734 Automatic Vehicle Detection Using Circular Synthetic Aperture Radar Image
Authors: Leping Chen, Daoxiang An, Xiaotao Huang
Abstract:
Automatic vehicle detection using synthetic aperture radar (SAR) images has been widely researched, as has detection using optical remote sensing images. However, most research treats the detection as an independent problem, failing to make full use of the information in the SAR data. In circular SAR (CSAR), the two long borders of a vehicle will shrink if the imaging surface is set higher than the reference one. Based on this variation, an automatic vehicle detection method using CSAR images is proposed to enhance detection ability in complex environments, such as closely packed vehicles, which confuse the detector. The detection method uses the multiple images generated at different height planes to obtain an energy-concentrated image for detection, and then uses the maximally stable extremal regions (MSER) method to detect vehicles. A vehicle detection result is given to verify the effectiveness and correctness of the proposed method.
Keywords: circular SAR, vehicle detection, automatic, imaging
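The MSER stage can be sketched with OpenCV's implementation on a toy energy-concentrated image. The image below is synthetic; a real pipeline would feed the image fused from the multiple CSAR imaging planes, and the default MSER thresholds are an assumption.

```python
# Sketch of the MSER detection stage on a synthetic stand-in for the
# energy-concentrated image; parameters and image are illustrative.
import cv2
import numpy as np

img = np.full((200, 200), 30, np.uint8)
cv2.rectangle(img, (40, 60), (60, 110), 200, -1)    # bright vehicle-like blob
cv2.rectangle(img, (120, 50), (140, 100), 210, -1)  # second vehicle

mser = cv2.MSER_create()
regions, boxes = mser.detectRegions(img)
for (x, y, w, h) in boxes:
    print(f"candidate vehicle at x={x}, y={y}, w={w}, h={h}")
```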
Procedia PDF Downloads 371
15733 Management of First Trimester Miscarriage
Authors: Madeleine Cox
Abstract:
Objective: to analyse patient choices in the management of first trimester miscarriage, and rates of complications including repeat procedures. Design: all first trimester miscarriages from a tertiary institution on the Gold Coast in a 6-month time frame (July to December 2021) were reviewed, including choice of management, histopathology, any representations or admissions, and potential complications. Results: a total of 224 first trimester miscarriages were identified. Of these, 183 (81%) opted to have surgical management in the first instance. Of the remaining patients, 18 (8%) opted to have medical management, and 28 (12.5%) opted to have expectant management. In total, 33 (15%) patients required a repeat treatment for retained products. One had medical management for a small volume of RPOC after suction curette. A significant number of these patients initially opted for medical management but then elected to have shorter follow-up than usual and went on to have retained products noted. Five women who had small volumes of RPOC after medical or surgical management had a repeat suction curette; however, they had very small volumes of products on scan and on curette and may have had a good result with repeated misoprostol administration. It is important to note that, whilst a common procedure, suction curettes are not without risk. Two women had significant blood loss of 1 L and 1.5 L. A third woman had a uterine perforation, a rare but recognised complication; she went on to require a laparoscopy, which identified a small serosal bowel injury that was closed by the colorectal team. Conclusion: management of first trimester miscarriage should be guided by patient preference. It is important to be able to provide patients with their choice of management; however, it is also important to have a good understanding of the risks of each management choice, the chances of a repeated procedure, and the appropriate time frame for follow-up. Women who choose to undertake medical or expectant management should be supported through this time, with an appropriate time frame between taking misoprostol and the repeat scan so that the true effects can be evaluated. Patients returning for scans within 2-3 days are more likely to be booked for further surgery; however, these may reflect patients who did not have adequate counselling or simply changed their mind about their preferred management option.
Keywords: miscarriage, gynaecology, obstetrics, first trimester
Procedia PDF Downloads 102
15732 Application of Unstructured Mesh Modeling in Evolving SGE of an Airport at the Confluence of Multiple Rivers in a Macro Tidal Region
Authors: A. A. Purohit, M. M. Vaidya, M. D. Kudale
Abstract:
Like various developing countries in the world, such as China, Malaysia and Korea, India is developing its infrastructure in the form of road, rail, airport and waterborne facilities at an exponential rate. Mumbai, the financial epicenter of India, is overcrowded, and to relieve the pressure of congestion, the Navi Mumbai suburb is being developed on the east bank of Thane Creek near Mumbai. Due to limited space at the existing Mumbai airports (domestic and international) to cater for the future demand of airborne traffic, the government proposes to build a new international airport near Panvel in Navi Mumbai. Considering the precedent of the extreme rainfall of 26th July 2005 and the nearby townships in the low-lying area where the new airport is proposed, it is essential to study this complex confluence area from a hydrodynamic standpoint under both tidal and extreme events (predicted discharge hydrographs), to avoid inundation of the surroundings due to the proposed airport reclamation (1160 hectares) and to determine the safe grade elevation (SGE). The model studies, conducted by applying an unstructured mesh to simulate the Panvel estuarine area (93 km2), calibrating and validating the model against hydraulic field measurements, and determining the maximum water levels around the airport for various extreme hydrodynamic events, namely the simultaneous occurrence of the highest tide from the Arabian Sea and peak flood discharges (Probable Maximum Precipitation and 26th July 2005) from the five rivers, the Gadhi, Kalundri, Taloja, Kasadi and Ulwe, meeting at the proposed airport area, revealed that: (a) the Ulwe River flowing beneath the proposed airport needs to be diverted; a 120 m wide Ulwe diversion channel, with a wider base width of 200 m at the SH-54 bridge on the Ulwe River, along with the removal of the existing bund in Moha Creek, is essential to keep the SGE of the airport to a minimum; (b) a clear waterway of 80 m at the SH-54 bridge (Ulwe River) and 120 m at the Amra Marg bridge near Moha Creek is also essential for the Ulwe diversion; and (c) river bank protection works on the right bank of the Gadhi River between the NH-4B and SH-54 bridges, as well as upstream of the Ulwe River diversion channel, are essential to avoid inundation of low-lying areas. The predicted maximum water levels around the airport keep the SGE to a minimum of 11 m with respect to the chart datum of Ulwe Bundar, and thus the development is not only technologically and economically feasible but also sustainable. Unstructured mesh modeling is a promising tool to simulate complex extreme hydrodynamic events and provides a reliable solution for evolving the optimal SGE of the airport.
Keywords: airport, hydrodynamics, safe grade elevation, tides
Procedia PDF Downloads 262
15731 Financial Instrument with High Investment Risk on the Warsaw Stock Exchange
Authors: Piotr Prewysz-Kwinto
Abstract:
The market for financial instruments with high risk has been developing very dynamically in recent years and attracts more and more interest from investors. It consists essentially of two groups of instruments, i.e., derivatives and exchange-traded products (ETP), and each year new types are introduced and offered to investors. The aim of this paper is to present the principles concerning financial instruments with high investment risk available on the Warsaw Stock Exchange (WSE), because they have quite complex constructions, and to evaluate the development of this market. In order to achieve this aim, statistical data from 2014-2016 were analyzed. The results confirm that the financial instruments with high investment risk available on the WSE constitute a diversified and the most numerous group of financial instruments and attract the most interest from investors. Responsible investing requires, however, a good knowledge of how they work and how they can generate profit, so as not to expose oneself to unexpected losses.
Keywords: derivatives, exchange traded products (ETP), financial instruments, financial market, risk, stock exchange
Procedia PDF Downloads 382
15730 Validation of Mapping Historical Linked Data to International Committee for Documentation (CIDOC) Conceptual Reference Model Using Shapes Constraint Language
Authors: Ghazal Faraj, András Micsik
Abstract:
Shapes Constraint Language (SHACL), a World Wide Web Consortium (W3C) language, provides well-defined shapes in RDF graphs named "shape graphs". These shape graphs validate other resource description framework (RDF) graphs, which are called "data graphs". The structural features of SHACL permit generating a variety of conditions to evaluate string matching patterns, value types, and other constraints. Moreover, the SHACL framework supports high-level validation by expressing more complex conditions in languages such as the SPARQL Protocol and RDF Query Language (SPARQL). SHACL includes two parts: SHACL Core and SHACL-SPARQL. SHACL Core includes all shapes that cover the most frequent constraint components, while SHACL-SPARQL is an extension that allows SHACL to express more complex customized constraints. Validating the efficacy of dataset mapping is an essential component of reconciled data mechanisms, as enhancing the linking of different datasets is a sustainable process. The conventional validation methods are the semantic reasoner and SPARQL queries: the former checks formalization errors and data type inconsistencies, while the latter detects data contradictions. After executing SPARQL queries, the retrieved information needs to be checked manually by an expert. However, this methodology is time-consuming and inaccurate, as it does not test the mapping model comprehensively. Therefore, there is a serious need for a new methodology that covers all validation aspects for linking and mapping diverse datasets. Our goal is to develop a new approach that achieves optimal validation outcomes. The first step towards this goal is implementing SHACL to validate the mapping between the International Committee for Documentation (CIDOC) conceptual reference model (CRM) and one of its ontologies. To initiate this project successfully, a thorough understanding of both source and target ontologies was required. Subsequently, the proper environment to run SHACL and its shape graphs was determined. As a case study, we applied SHACL to a CIDOC-CRM dataset after running a Pellet reasoner via the Protégé program. The applied validation falls under multiple categories: a) data type validation, which checks whether the source data is mapped to the correct data type, for instance, whether a birthdate is assigned to xsd:datetime and linked to a Person entity via the crm:P82a_begin_of_the_begin property; b) data integrity validation, which detects inconsistent data, for instance, inspecting whether a person's birthdate occurred before any of the linked event creation dates. The expected results of our work are: 1) highlighting validation techniques and categories, and 2) selecting the most suitable techniques for these various categories of validation tasks. The next step is to establish a comprehensive validation model and generate SHACL shapes automatically.
Keywords: SHACL, CIDOC-CRM, SPARQL, validation of ontology mapping
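The datatype-validation case can be sketched with the pySHACL package, assuming that library is available: a shape targeting subjects of crm:P82a_begin_of_the_begin requires an xsd:dateTime value. The example data is made up, and the crm prefix URI is the commonly published CIDOC-CRM namespace.

```python
# Sketch of the datatype-validation category using pySHACL (an assumed
# tooling choice): check that values of crm:P82a_begin_of_the_begin are
# typed xsd:dateTime. Graphs are inlined Turtle; the data is invented.
from pyshacl import validate

shapes = """
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix crm: <http://www.cidoc-crm.org/cidoc-crm/> .
@prefix ex:  <http://example.org/> .

ex:BirthDateShape a sh:NodeShape ;
    sh:targetSubjectsOf crm:P82a_begin_of_the_begin ;
    sh:property [
        sh:path crm:P82a_begin_of_the_begin ;
        sh:datatype xsd:dateTime ;
    ] .
"""

data = """
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix crm: <http://www.cidoc-crm.org/cidoc-crm/> .
@prefix ex:  <http://example.org/> .

ex:birthOfPerson1 crm:P82a_begin_of_the_begin "not-a-date" .
ex:birthOfPerson2 crm:P82a_begin_of_the_begin
    "1856-07-10T00:00:00"^^xsd:dateTime .
"""

conforms, _, report = validate(data, shacl_graph=shapes,
                               data_graph_format="turtle",
                               shacl_graph_format="turtle")
print(conforms)   # False: the first birthdate fails the datatype constraint
print(report)
```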
Procedia PDF Downloads 256
15729 Estimation of Transition and Emission Probabilities
Authors: Aakansha Gupta, Neha Vadnere, Tapasvi Soni, M. Anbarsi
Abstract:
Protein secondary structure prediction is one of the most important goals pursued by bioinformatics and theoretical chemistry; it is highly important in medicine and biotechnology. Some aspects of protein function and genome analysis can be predicted by secondary structure prediction, which is used to help annotate sequences, classify proteins, identify domains, and recognize functional motifs. In this paper, we represent protein secondary structure as a mathematical model. To extract and predict the protein secondary structure from the primary structure, we require a set of parameters. Any constants appearing in the model are specified by these parameters, which also provide a mechanism for efficient and accurate use of data. There are many algorithms for estimating these model parameters, of which the most popular is the Expectation-Maximization (EM) algorithm. The model parameters are estimated from protein datasets such as RS126 by using a Bayesian probabilistic method (the data set being categorical). This work can then be extended to comparing the efficiency of the EM algorithm with that of other algorithms for estimating the model parameters, which will in turn lead to an efficient component for protein secondary structure prediction. Furthermore, this paper provides scope to use these parameters for predicting the secondary structure of proteins using machine learning techniques like neural networks and fuzzy logic. The ultimate objective is to obtain an accuracy greater than that previously achieved.
Keywords: model parameters, expectation maximization algorithm, protein secondary structure prediction, bioinformatics
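For intuition, the quantity EM re-estimates at each iteration reduces, in the fully labeled case, to counting transitions between hidden states and emissions from each state. The toy sequences below are invented stand-ins for residues (observations) and H/E/C secondary-structure states.

```python
# Toy maximum-likelihood estimation of transition and emission
# probabilities from labeled data -- the counting step EM iterates when
# the state labels are hidden. The sequences are made up.
from collections import Counter, defaultdict

residues = "MKVLAAGLLVKLAAGG"
states   = "CHHHHEEECCHHHHCC"

trans, emit = Counter(), defaultdict(Counter)
for s0, s1 in zip(states, states[1:]):
    trans[(s0, s1)] += 1
for s, r in zip(states, residues):
    emit[s][r] += 1

state_set = sorted(set(states))
for s0 in state_set:
    total = sum(trans[(s0, s1)] for s1 in state_set)
    for s1 in state_set:
        if trans[(s0, s1)]:
            print(f"P({s1}|{s0}) = {trans[(s0, s1)] / total:.2f}")
for s in state_set:
    total = sum(emit[s].values())
    print(f"emissions from {s}:",
          {r: round(c / total, 2) for r, c in emit[s].items()})
```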
Procedia PDF Downloads 484
15728 Computational Analysis and Daily Application of the Key Neurotransmitters Involved in Happiness: Dopamine, Oxytocin, Serotonin, and Endorphins
Authors: Hee Soo Kim, Ha Young Kyung
Abstract:
Happiness and pleasure are a result of dopamine, oxytocin, serotonin, and endorphin levels in the body. In order to increase the levels of these four neurochemicals, it is important to associate daily activities with their corresponding neurochemical releases. This includes setting goals, maintaining social relationships, laughing frequently, and exercising regularly. The likelihood of experiencing happiness increases when all four neurochemicals are released at optimal levels. The achievement of happiness is important because it increases healthiness, productivity, and the ability to overcome adversity. To process emotions, electrical brain waves, brain structure, and neurochemicals must be analyzed. This research uses Chemcraft and Avogadro to determine the theoretical and chemical properties of the four neurochemical molecules. Each neurochemical molecule's thermodynamic stability is calculated to assess the efficiency of the molecules. The study found that among dopamine, oxytocin, serotonin, and alpha-, beta-, and gamma-endorphin, beta-endorphin has the lowest optimized energy of 388.510 kJ/mol. Beta-endorphin, a neurotransmitter involved in mitigating pain and stress, is thus the most thermodynamically stable and efficient molecule involved in the process of happiness. Through examining such properties of happiness neurotransmitters, the science of happiness is better understood.
Keywords: happiness, neurotransmitters, positive psychology, dopamine, oxytocin, serotonin, endorphins
Procedia PDF Downloads 155
15727 Spiritual Causes of Unusual Happenings in Life: An Analytical Study in Religious Perspective
Authors: Muhammad Samiullah
Abstract:
Unquestionably, human life has been complex from the beginning. In the modern era, with all the advancements in science and technology, this complexity is increasing day by day, and human life is becoming more and more difficult to sustain. The world has become more mysterious than before. Despite all their advanced knowledge, information, and exposure to the universe and to themselves, human beings face unusual happenings and blockages in their lives in the form of illnesses, diseases, relationship problems, and economic hurdles. This paper discusses and analyzes the underlying spiritual causes and their effects on human life, and suggests remedies from an Islamic perspective, i.e., in the light of theology and Islamic literature. Hermeneutic, narrative, and case study approaches are adopted within a qualitative methodology throughout the research. Our findings show that Islam eloquently and adequately describes the spiritual causes and factors behind unusual happenings and their effects on human life, and also provides remedies and cures to overcome these blockages.
Keywords: religious psychology, spiritual theology, Islam and spirituality, unusual happenings
Procedia PDF Downloads 98
15726 Ecological Evaluation and Conservation Strategies of Economically Important Plants in Indian Arid Zone
Authors: Sher Mohammed, Purushottam Lal, Pawan K. Kasera
Abstract:
The Thar Desert of Rajasthan covers a wide geographical area spreading between 23.3° and 30.12° N latitude and 69.3° and 76° E longitude, and has a unique spectrum of arid zone vegetation. This desert spreads over 12 districts and holds a rich store of economically important and threatened plant diversity, interacting and growing with the adverse climatic conditions of the area. Due to variable geological, physiographic, climatic, edaphic and biotic factors, the arid zone medicinal flora exhibits a wide collection of angiosperm families. The herbal diversity of this arid region is medicinally important in household remedies among tribal communities as well as in traditional systems. The ongoing increase in disturbances in natural ecosystems is due to climatic and biological factors, including anthropogenic ones. The unique flora, and consequently the dependent faunal diversity, of the desert ecosystem is losing its biotic potential. A large number of plants have no future unless immediate steps are taken to arrest the causes and lead to their biological improvement. At present, the potential loss in ecological amplitude of various genera and species has made several plant species red-listed in the arid zone vegetation, such as Commiphora wightii, Tribulus rajasthanensis, Calligonum polygonoides, Ephedra foliata, Leptadenia reticulata, Tecomella undulata, Blepharis sindica, Peganum harmala, Sarcostemma viminale, etc. Most arid zone species are under serious pressure from prevailing ecosystem factors in completing their life cycles. Genetic, molecular, cytological, biochemical, metabolic, reproductive and germination levels are among the several points at which the floral diversity of the arid zone is facing severe ecological influences. So, there is an urgent need to conserve them. There are several opportunities in the field to carry out remarkable work at these levels to protect the native plants in their natural habitat instead of relying only on their in vitro multiplication.
Keywords: ecology, evaluation, xerophytes, economically important/threatened plants, conservation
Procedia PDF Downloads 267
15725 Application of the State of the Art of Hydraulic Models to Manage Coastal Problems, Case Study: The Egyptian Mediterranean Coast Model
Authors: Al. I. Diwedar, Moheb Iskander, Mohamed Yossef, Ahmed ElKut, Noha Fouad, Radwa Fathy, Mustafa M. Almaghraby, Amira Samir, Ahmed Romya, Nourhan Hassan, Asmaa Abo Zed, Bas Reijmerink, Julien Groenenboom
Abstract:
Coastal problems stress the coastal environment due to its complexity. The dynamic interaction between the sea and the land, in addition to human interventions and activities, results in serious problems that threaten coastal areas worldwide. This makes the coastal environment highly vulnerable to natural processes like flooding and erosion, and to the impact of human activities such as pollution. Protecting and preserving this vulnerable coastal zone, with its valuable ecosystems, calls for addressing the coastal problems; this, in the end, will support the sustainability of coastal communities and sustain current and future generations. Consequently, applying suitable management strategies and sustainable development that consider the unique characteristics of the coastal system is a must. The coastal management philosophy aims to solve the conflicts of interest between human development activities and this dynamic nature. Modeling has emerged as a successful tool that provides support to decision-makers, engineers, and researchers for better management practices, and modeling tools have proved accurate and reliable in prediction. With the capability to integrate data from various sources, such as bathymetric surveys, satellite images, and meteorological data, modeling offers engineers and scientists the possibility to understand this complex dynamic system and gain in-depth insight into the interaction between natural and human-induced factors. This enables decision-makers to make informed choices and develop effective strategies for sustainable development and risk mitigation of the coastal zone. The application of modeling tools supports the evaluation of various scenarios by affording the possibility to simulate and forecast different coastal processes, from hydrodynamic and wave actions to the resulting flooding and erosion. The state-of-the-art application of modeling tools in coastal management allows for better understanding and prediction of coastal processes, optimizing infrastructure planning and design, supporting ecosystem-based approaches, assessing climate change impacts, managing hazards, and facilitating stakeholder engagement. This paper emphasizes the role of hydraulic models in enhancing the management of coastal problems by discussing the diverse applications of modeling in coastal management. It highlights the role of modeling in understanding complex coastal processes and predicting outcomes, and the importance of informing decision-makers with modeling results, which gives technical and scientific support for achieving sustainable coastal development and protection.
Keywords: coastal problems, coastal management, hydraulic model, numerical model, physical model
Procedia PDF Downloads 32
15724 Risks beyond Cyber in IoT Infrastructure and Services
Authors: Mattias Bergstrom
Abstract:
Significance of the Study: This research provides new insights into the risks of digitally embedded infrastructure. Through this research, we analyze each risk and its potential negation strategies, especially for AI and autonomous automation. Moreover, the analysis presented in this paper conveys valuable information for future research that can create more stable, secure, and efficient autonomous systems. To learn and understand the risks, a large IoT system was envisioned, and risks involving hardware, tampering, and cyberattacks were collected, researched, and evaluated to create a comprehensive understanding of the potential risks. Potential solutions were then evaluated on an open-source IoT hardware setup. The following list shows the identified passive and active risks evaluated in the research. Passive risks: (1) Hardware failures: critical systems relying on high-rate data and data quality are growing; SCADA systems for infrastructure are good examples of such systems. (2) Hardware delivering erroneous data: sensors break, and when they do so, they don't always go silent; they can keep going, except that the data they deliver is garbage, and if that data is not filtered out, it becomes disruptive noise in the system. (3) Bad hardware injection: erroneously generated sensor data can be pumped into a system by malicious actors with the intent to create disruptive noise in critical systems. (4) Data gravity: the weight of the data collected will affect data mobility. (5) Cost inhibitors: running services that need huge centralized computing is cost-inhibiting; large complex AI can be extremely expensive to run. Active risks: Denial of service: one of the simplest attacks, where an attacker just overloads the system with bogus requests so that valid requests disappear in the noise. Malware: anything from simple viruses to complex botnets created with specific goals, where the creator steals computing power and bandwidth from you to attack someone else. Ransomware: a kind of malware, but so different in its implementation that it is worth its own mention; the goal of these pieces of software is to encrypt your system so that it can only be unlocked with a key that is held for ransom. DNS spoofing: by spoofing DNS calls, valid requests and data dumps can be sent to bad destinations, where the data can be extracted for extortion, or corrupted and re-injected into a running system, creating a data-echo noise loop. After testing multiple potential solutions, we found that the most prominent solution to these risks was to use a peer-to-peer consensus algorithm over a blockchain to validate the data and behavior of the devices (sensors, storage, and computing) in the system. With the devices autonomously policing themselves for deviant behavior, all the risks listed above can be negated. In conclusion, an Internet middleware that provides these features would be an easy and secure solution for any future autonomous IoT deployment, as it provides separation from the open Internet while remaining accessible via the blockchain keys.
Keywords: IoT, security, infrastructure, SCADA, blockchain, AI
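The tamper-evidence idea behind validating device data on a ledger can be illustrated with a minimal hash chain: each record commits to the previous one, so later alteration of any reading is detectable. This is a sketch of the principle only, not the proposed P2P consensus middleware.

```python
# Minimal hash-chain sketch: each block commits to the previous block's
# hash, so tampering with any earlier sensor reading breaks the chain.
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

chain = []
prev = "0" * 64
for reading in [{"sensor": "t1", "value": 21.4},
                {"sensor": "t1", "value": 21.6}]:
    h = block_hash(reading, prev)
    chain.append({"record": reading, "prev": prev, "hash": h})
    prev = h

chain[0]["record"]["value"] = 99.9  # malicious tampering with old data
ok = all(block_hash(b["record"], b["prev"]) == b["hash"] for b in chain)
print("chain valid:", ok)  # False: the tampering is detected
```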
Procedia PDF Downloads 108
15723 An Introduction to the Radiation-Thrust Based on Alpha Decay and Spontaneous Fission
Authors: Shiyi He, Yan Xia, Xiaoping Ouyang, Liang Chen, Zhongbing Zhang, Jinlu Ruan
Abstract:
As key systems of spacecraft, various propulsion systems have been developing rapidly, including ion thrusters, laser thrust, solar sails and other micro-thrusters. However, these systems still have shortcomings. The ion thruster requires a high voltage or magnetic field for acceleration, resulting in extra systems, heavy mass and large volume. Laser thrust is now mostly ground-based and provides pulsed thrust, constrained by the station distribution and the capacity of the laser. The thrust direction of a solar sail is limited by its position relative to the Sun, so it is hard to propel toward the Sun or to adjust in shadow. In this paper, a novel nuclear thruster based on alpha decay and spontaneous fission is proposed, and the principle of this radiation thrust with alpha particles is expounded. Radioactive materials with different released energies, such as 210Po with 5.4 MeV and 238Pu with 5.29 MeV, attached to a metal film provide thrusts in the range 0.02-5 µN/cm2. With this reaction force, radiation is able to serve as a propulsion source. With the advantages of low system mass, high accuracy and long active time, the radiation thrust is promising in the fields of space debris removal, orbit control of nano-satellite arrays and deep space exploration. For further study, a formula relating the amplitude and direction of the thrust to the released energy and decay coefficient is set up. With this initial formula, the alpha-radiating elements with half-lives longer than a hundred days are calculated and listed. As alpha particles are emitted continuously, the residual charge in the metal film grows and affects the energy distribution of the emitted alpha particles. With the residual charge or an external electromagnetic field, the emission of alpha particles behaves differently, and this is analyzed in this paper. Furthermore, three more complex situations are discussed: a radiating element generating alpha particles with several energies at different intensities, a mixture of various radiating elements, and cascaded alpha decay. Combining these, it is more efficient and flexible to adjust the thrust amplitude. The propulsion model for spontaneous fission is similar to that for alpha decay, but with a more complex angular distribution. A new quasi-spherical space propulsion system based on the radiation thrust is introduced, as well as the collection and processing system for excess charge and reaction heat. The energy and spatial angular distribution of the emitted alpha particles per unit area and for a certain propulsion system have been studied. As the amplitude and angle of the radiation thrust are changed, an orbital variation strategy for space debris removal is shown and optimized.
Keywords: alpha decay, angular distribution, emitting energy, orbital variation, radiation-thruster
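A back-of-envelope version of the thrust formula the abstract alludes to: thrust per area is roughly the surface emission rate times the mean normal momentum of an emitted alpha. The sketch assumes nonrelativistic alphas emitted isotropically into a half-space (mean cosine 1/2), and the emission rate is an assumed illustrative value, not a computed film activity.

```python
# Rough thrust-per-area estimate for a Po-210 film; the emission rate is
# an illustrative assumption, the physical constants are standard.
import math

E_MEV = 5.4                      # alpha energy of Po-210
M_ALPHA_MEV = 3727.4             # alpha rest mass in MeV/c^2
MEV_PER_C_TO_SI = 5.344286e-22   # kg*m/s per (MeV/c)

p = math.sqrt(2 * M_ALPHA_MEV * E_MEV) * MEV_PER_C_TO_SI  # momentum, kg*m/s
rate_per_cm2 = 2.0e13            # assumed surface emission rate, 1/(s*cm^2)

thrust = rate_per_cm2 * p * 0.5  # N per cm^2 (mean normal momentum = p/2)
print(f"alpha momentum ~ {p:.3e} kg*m/s")
print(f"thrust ~ {thrust * 1e6:.2f} uN/cm^2")
```

With these assumed numbers the estimate lands around 1 µN/cm², inside the 0.02-5 µN/cm² range quoted in the abstract.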
Procedia PDF Downloads 210
15722 Microstructural and Magnetic Properties of Ni50Mn39Sn11 and Ni50Mn36Sn14 Heusler Alloys
Authors: Mst Nazmunnahar, Juan del Val, Alena Vimmrova, Blanca Hernando, Julian González
Abstract:
We report the microstructural and magnetic properties of Ni50Mn39Sn11 and Ni50Mn36Sn14 ribbon Heusler alloys. Experimental results were obtained by differential scanning calorimetry, X-ray diffraction and vibrating sample magnetometry techniques. The Ni-Mn-Sn system undergoes a martensitic structural transformation over a wide temperature range. For example, for the Ni50Mn39Sn11 ribbon alloy, the start and finish temperatures of the martensitic and austenite phase transformations were Ms = 336 K, Mf = 328 K, As = 335 K and Af = 343 K, whereas no structural transformation is observed for the Ni50Mn36Sn14 alloy. Magnetic measurements show typical ferromagnetic behavior with a Curie temperature of 207 K at a low applied field of 50 Oe. The complex behavior exhibited by these Heusler alloys should be ascribed to the strong coupling between magnetism and structure, their magnetic behavior being determined by the distance between Mn atoms.
Keywords: as-cast ribbon, Heusler alloys, magnetic properties, structural transformation
Procedia PDF Downloads 458
15721 Numerical Investigation on Optimizing Fatigue Life in a Lap Joint Structure
Authors: P. Zamani, S. Mohajerzadeh, R. Masoudinejad, K. Farhangdoost
Abstract:
The riveting process is one of the important ways of fastening lap joints in aircraft structures. Failure of aircraft lap joints directly depends on the stress field in the joint. An important application of the riveting process is in the construction of aircraft fuselage structures. In this paper, a 3D finite element analysis is carried out in order to optimize the residual stress field in a riveted lap joint and also to estimate its fatigue life. Subsequently, a number of experiments are designed and analyzed using design of experiments (DOE), and the Taguchi method is used to select an optimized case among the different levels of each factor. In addition, the factor that most affects the residual stress field is identified. This optimized case provides the maximum residual stress field. The fatigue life of the optimized joint is estimated by the Paris-Erdogan law. Stress intensity factors (SIFs) are calculated using both finite element analysis and an experimental formula, taking into account the effects of the residual stress field, geometry, and secondary bending. Good agreement is found between the results of the two methods. Comparison between the fatigue life of the optimized joint and that of other joints shows an improvement in the joint's life.
Keywords: fatigue life, residual stress, riveting process, stress intensity factor, Taguchi method
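The Paris-Erdogan life estimate can be sketched by integrating da/dN = C (ΔK)^m with ΔK = Y Δσ √(πa) from an initial to a critical crack size. The constants below are generic illustrative values, not the calibrated parameters of the riveted joint in the paper.

```python
# Paris-Erdogan fatigue-life sketch: numerically integrate the crack
# growth law. C, m, geometry factor and stresses are illustrative only.
import math

C, m = 1.0e-11, 3.0          # Paris constants (da/dN in m/cycle, dK in MPa*sqrt(m))
Y = 1.12                     # geometry factor
dsigma = 80.0                # stress range, MPa
a, a_crit = 1.0e-3, 10.0e-3  # initial / critical crack length, m

cycles, da = 0.0, 1.0e-5
while a < a_crit:
    dK = Y * dsigma * math.sqrt(math.pi * a)
    cycles += da / (C * dK ** m)   # cycles spent growing the crack by da
    a += da

print(f"estimated fatigue life ~ {cycles:.3e} cycles")
```

A beneficial residual stress field enters such an estimate by lowering the effective ΔK, which is how the optimized riveting case extends the joint's life.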
Procedia PDF Downloads 454
15720 The Effect of Taxpayer Political Beliefs on Tax Evasion Behavior: An Empirical Study Applied to Tunisian Case
Authors: Nadia Elouaer
Abstract:
Tax revenue is the main state resource and one of the important variables in tax policy. Nevertheless, this resource is continually decreasing, so it is important to focus on the reasons for this decline. Several studies show that taxpayers are reluctant to pay taxes, especially in countries at risk or in transition, including Tunisia. This study focuses on the tax evasion behavior of Tunisian taxpayers under the influence of their political beliefs, as well as the influence of different tax compliance variables. Using a questionnaire, a sample of 500 Tunisian taxpayers is used to examine the relationship between political beliefs and affiliations and tax compliance variables, as well as the causal link between political beliefs and fraudulent behavior. The data were examined using correlation, factor, and regression analyses, which found a positive and statistically significant relationship between the different tax compliance variables and tax evasion behavior. There is also a positive and statistically significant relationship between tax evasion and political beliefs and affiliations. The study of the relationship between political beliefs and compliance variables shows that they are closely related. We conclude that tax evasion and political beliefs are closely linked, and that the government should update its tax policy and modernize its administration in order to strengthen credibility and the disclosure of information, so as to restore a relationship of trust between the public authorities and the taxpayer.
Keywords: fiscal policy, political beliefs, tax evasion, taxpayer behavior
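The correlation-plus-regression step described can be sketched as follows; the respondent scores are synthetic, not the Tunisian survey data.

```python
# Illustrative correlation/regression step: relate a tax-evasion score to
# a political-belief score over 500 respondents. The data is synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
beliefs = rng.normal(0, 1, 500)                    # political-belief score
evasion = 0.4 * beliefs + rng.normal(0, 1, 500)    # tax-evasion score

r, p = stats.pearsonr(beliefs, evasion)
fit = stats.linregress(beliefs, evasion)
print(f"Pearson r = {r:.2f} (p = {p:.1e})")
print(f"regression slope = {fit.slope:.2f}")
```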
Procedia PDF Downloads 151
15719 Analysis of Thermal Damping in Si Based Torsional Micromirrors
Authors: R. Resmi, M. R. Baiju
Abstract:
The thermal damping of a dynamically vibrating micromirror is an important factor affecting the design of MEMS-based actuator systems. In the development of new micromirror systems, accurately assessing the extent of energy loss due to thermal damping and predicting the performance of the system is essential. In this paper, the depth of the thermal penetration layer at different eigenfrequencies and the temperature variation distributions surrounding a vibrating micromirror are analyzed. The thermal penetration depth, which corresponds to the thermal boundary layer in which energy is lost, is determined as a measure of the thermal damping. The energy is mainly dissipated in the thermal boundary layer, and the thickness of this layer is an important parameter. Detailed thermoacoustics is used to model the air domain surrounding the micromirror. The thickness of the boundary layer, temperature variations and thermal power dissipation are analyzed for a Si-based torsional-mode micromirror. It is found that the thermal penetration depth decreases with eigenfrequency, and hence operating the micromirror at higher frequencies is essential for reducing thermal damping. The temperature variations and thermal power dissipation at different eigenfrequencies are also analyzed. Both frequency-response and eigenfrequency analyses are performed using COMSOL Multiphysics software.
Keywords: eigenfrequency analysis, micromirrors, thermal damping, thermoacoustic interactions
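The frequency dependence noted here follows from the standard expression for the thermal penetration depth, delta = sqrt(2*alpha/omega) with alpha = k/(rho*cp). A quick evaluation for air at room conditions shows the shrinking boundary layer; the air properties are standard textbook values, not the paper's simulation inputs.

```python
# Thermal penetration depth in the air surrounding the micromirror,
# evaluated at a few eigenfrequencies. Air properties are standard
# room-condition values.
import math

k, rho, cp = 0.026, 1.2, 1005.0        # air: W/(m*K), kg/m^3, J/(kg*K)
alpha = k / (rho * cp)                 # thermal diffusivity, m^2/s

for f in (1e3, 1e4, 1e5):              # eigenfrequencies, Hz
    delta = math.sqrt(2 * alpha / (2 * math.pi * f))
    print(f"f = {f:8.0f} Hz -> penetration depth = {delta * 1e6:6.1f} um")
```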
Procedia PDF Downloads 368