Search results for: biologically inspired algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4215

2145 Web Data Scraping Technology Using Term Frequency Inverse Document Frequency to Enhance the Big Data Quality on Sentiment Analysis

Authors: Sangita Pokhrel, Nalinda Somasiri, Rebecca Jeyavadhanam, Swathi Ganesan

Abstract:

Tourism is a booming industry with huge future potential for global wealth and employment. Countless data are generated on social media sites every day, creating numerous opportunities to bring more insight to decision-makers. Integrating Big Data technology into the tourism industry allows companies to determine where their customers have been and what they like. This information can then be used by businesses, such as those managing visitor centers or hotels, while tourists can get a clear idea of places before visiting. We process natural language by analysing the sentiment features of online reviews from tourists, and we supply an enhanced long short-term memory (LSTM) framework for sentiment feature extraction from travel reviews. For experimental validation, we constructed a web review database using a crawler and web scraping techniques to evaluate the effectiveness of our methodology. Sentences were first classified through the VADER and RoBERTa models to obtain the polarity of the reviews. In this paper, we study feature-extraction methods such as count vectorization and TF-IDF vectorization, and implement a Convolutional Neural Network (CNN) classifier for sentiment analysis to decide whether a tourist’s attitude towards a destination is positive, negative, or neutral, based on the review text posted online. After pre-processing and cleaning the dataset, the CNN algorithm achieved an accuracy of 96.12% for positive and negative sentiment analysis.
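The TF-IDF weighting mentioned above can be illustrated with a minimal, pure-Python sketch. This is not the authors' implementation: the toy reviews, whitespace tokenization, and the raw-count TF with log(N/df) IDF are all assumptions for illustration.

```python
import math
from collections import Counter

def tfidf(corpus):
    """Compute a simple TF-IDF matrix for a list of token lists.

    TF is the raw term count in a document; IDF is log(N / df),
    where df is the number of documents containing the term.
    """
    n_docs = len(corpus)
    df = Counter()
    for doc in corpus:
        df.update(set(doc))          # count each term once per document
    vocab = sorted(df)
    idf = {t: math.log(n_docs / df[t]) for t in vocab}
    matrix = []
    for doc in corpus:
        tf = Counter(doc)
        matrix.append([tf[t] * idf[t] for t in vocab])
    return vocab, matrix

# two hypothetical tourist reviews, tokenized by whitespace
reviews = [
    "great hotel great view".split(),
    "terrible noisy hotel".split(),
]
vocab, mat = tfidf(reviews)
```

Note that a term occurring in every review ("hotel") gets zero weight, which is exactly why TF-IDF suppresses uninformative words before a classifier such as a CNN sees the features.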

Keywords: count vectorization, convolutional neural network, crawler, data technology, long short-term memory, web scraping, sentiment analysis

Procedia PDF Downloads 88
2144 Probabilistic Graphical Model for the Web

Authors: M. Nekri, A. Khelladi

Abstract:

The World Wide Web is a network with a complex topology, the main properties of which are a power-law degree distribution, a low clustering coefficient, and a small average distance. Modeling the web as a graph allows information to be located quickly and consequently helps in the construction of search engines. Here, we present a model based on existing probabilistic graphs that exhibits all the aforesaid characteristics. This work consists in studying the web in order to understand its structure, which will enable us to model it more easily and to propose a possible algorithm for its exploration.
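One well-known probabilistic construction with the stated power-law property is Barabási–Albert-style preferential attachment, sketched below. This is an assumption for illustration only; the authors' model may differ in its attachment rule and parameters.

```python
import random

def preferential_attachment_graph(n, m=2, seed=0):
    """Grow a graph where each new node attaches to m distinct existing
    nodes with probability proportional to their current degree, which
    yields a power-law degree distribution like the web graph's."""
    rng = random.Random(seed)
    # start from a small complete core of m + 1 nodes
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # list of edge endpoints: each occurrence weights a node by its degree
    endpoints = [v for e in edges for v in e]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(endpoints))
        for t in targets:
            edges.append((new, t))
            endpoints.extend([new, t])
    return edges

edges = preferential_attachment_graph(200)
```

Sampling uniformly from the endpoint list is the standard trick that makes degree-proportional selection O(1) per draw.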

Keywords: clustering coefficient, preferential attachment, small world, web community

Procedia PDF Downloads 272
2143 Computing Machinery and Legal Intelligence: Towards a Reflexive Model for Computer Automated Decision Support in Public Administration

Authors: Jacob Livingston Slosser, Naja Holten Moller, Thomas Troels Hildebrandt, Henrik Palmer Olsen

Abstract:

In this paper, we propose a model for human-AI interaction in public administration that involves legal decision-making. Inspired by Alan Turing’s test for machine intelligence, we propose a way of institutionalizing a continuous working relationship between man and machine that aims at ensuring both good legal quality and higher efficiency in decision-making processes in public administration. We also suggest that our model enhances the legitimacy of using AI in public legal decision-making. We suggest that caseloads in public administration could be divided between a manual and an automated decision track. The automated decision track will be an algorithmic recommender system trained on former cases. To avoid unwanted feedback loops and biases, part of the caseload will be dealt with by both a human case worker and the automated recommender system. In those cases, an experienced human case worker will have the role of an evaluator, choosing between the two decisions. This model will ensure that the algorithmic recommender system does not compromise the quality of legal decision-making in the institution. It also enhances the legitimacy of using algorithmic decision support because it provides justification for its use: the system is seen as superior to human decisions when its recommendations are preferred by experienced case workers. The paper outlines in some detail the process through which such a model could be implemented. It also addresses the important issue that legal decision-making is subject to legislative and judicial changes and that legal interpretation is context-sensitive. Both of these issues require continuous supervision of, and adjustments to, algorithmic recommender systems when they are used for legal decision-making purposes.
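The proposed division of the caseload could be sketched as follows. This is a purely hypothetical illustration: the track proportions, the dual-review fraction, and the random assignment rule are assumptions, not details from the paper.

```python
import random

def assign_track(case_ids, dual_fraction=0.2, seed=0):
    """Split a caseload into three tracks: automated only, manual only,
    and a dual track in which both the recommender system and a human
    case worker decide, with an experienced evaluator choosing between
    the two decisions. Fractions here are illustrative assumptions."""
    rng = random.Random(seed)
    tracks = {"automated": [], "manual": [], "dual": []}
    for cid in case_ids:
        r = rng.random()
        if r < dual_fraction:
            tracks["dual"].append(cid)          # reviewed by both tracks
        elif r < 0.5 + dual_fraction / 2:
            tracks["automated"].append(cid)     # recommender system only
        else:
            tracks["manual"].append(cid)        # human case worker only
    return tracks

tracks = assign_track(list(range(100)))
```

The dual track is the monitoring channel: comparing its two decision streams over time is what would reveal drift or bias in the recommender.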

Keywords: administrative law, algorithmic decision-making, decision support, public law

Procedia PDF Downloads 217
2142 Quaternized PPO/PSF Anion Exchange Membranes Doped with ZnO-Nanoparticles for Fuel Cell Application

Authors: P. F. Msomi, P. T. Nonjola, P. G. Ndungu, J. Ramontja

Abstract:

The projected global energy demand and increasing levels of greenhouse gases and pollutants have inspired an intense search for alternative new energy technologies that will provide clean, low-cost and environmentally friendly solutions meeting end-user requirements. Alkaline anion exchange membrane fuel cells (AAEMFCs) have been recognized as ideal candidates for the generation of such clean energy for future stationary and mobile applications due to their many advantages. The key component of the AAEMFC is the anion exchange membrane (AEM). In this report, a series of quaternized poly(2,6-dimethyl-1,4-phenylene oxide)/polysulfone (QPPO/PSF) blend anion exchange membranes (AEMs) were successfully fabricated and characterized for alkaline fuel cell application. Zinc oxide (ZnO) nanoparticles were introduced into the polymer matrix to enhance the intrinsic properties of the AEM. The characteristic properties of the QPPO/PSF and QPPO/PSF-ZnO blend membranes were investigated with X-ray diffraction (XRD), thermogravimetric analysis (TGA), scanning electron microscopy (SEM) and contact angle (CA) measurements. To confirm successful quaternization, FT-IR spectroscopy and proton nuclear magnetic resonance (1H NMR) were used. Other properties, such as ion exchange capacity (IEC), water uptake and ion conductivity (IC), were also measured to check whether the prepared nanocomposite materials are suitable for fuel cell application. The intrinsic properties of the membrane were found to be enhanced by the addition of ZnO nanoparticles, which resulted in the highest IEC of 3.72 mmol/g and a 30-fold IC increase of the nanocomposite due to its lower methanol permeability. These results indicate that QPPO/PSF-ZnO is a good candidate for AAEMFC application.

Keywords: anion exchange membrane, fuel cell, zinc oxide nanoparticle, nanocomposite

Procedia PDF Downloads 428
2141 Neural Network Based Control Algorithm for Inhabitable Spaces Applying Emotional Domotics

Authors: Sergio A. Navarro Tuch, Martin Rogelio Bustamante Bello, Leopoldo Julian Lechuga Lopez

Abstract:

In recent years, Mexico’s population has seen a rise in various negative physiological and mental states. Two main consequences of this problem are deficient work performance and high levels of stress, with an important impact on a person’s physical, mental and emotional health. Several approaches, such as the use of audiovisual stimuli to induce emotions and modify a person’s emotional state, can be applied in an effort to decrease these negative effects. Using different non-invasive physiological sensors, such as EEG, together with luminosity sensing and face recognition, we gather information on the subject’s current emotional state. In a controlled environment, a subject is shown a series of selected images from the International Affective Picture System (IAPS) in order to induce a specific set of emotions and obtain information from the sensors. The raw data obtained are statistically analyzed in order to filter out only those groups of information that relate to the subject’s emotions and the current values of the physical variables in the controlled environment, such as luminosity, RGB light color, temperature, oxygen level and noise. Finally, a neural-network-based control algorithm is given the data obtained in order to provide feedback to the system and automate the modification of the environment variables and the audiovisual content shown, so that these changes can positively alter the subject’s emotional state. During the research, it was found that light color was directly related to the type of impact the audiovisual content had on the subject’s emotional state. Red illumination increased the impact of violent images, while green illumination, along with relaxing images, decreased the subject’s levels of anxiety. Specific differences between men and women were found as to which types of images generated a greater impact in either gender. The population sample was mainly constituted of college students, whose data analysis showed a decreased sensibility to violence towards humans. Despite the early stage of the control algorithm, the results obtained from the population sample give us better insight into the possibilities of emotional domotics and the applications that can be created to improve performance in people’s lives. The objective of this research is to create a positive impact by applying technology to everyday activities; nonetheless, an ethical problem arises, since this could also be applied to control a person’s emotions and shift their decision-making.

Keywords: data analysis, emotional domotics, performance improvement, neural network

Procedia PDF Downloads 140
2140 Separating Landform from Noise in High-Resolution Digital Elevation Models through Scale-Adaptive Window-Based Regression

Authors: Anne M. Denton, Rahul Gomes, David W. Franzen

Abstract:

High-resolution elevation data are becoming increasingly available, but typical approaches for computing topographic features, like slope and curvature, still assume small sliding windows, for example, of size 3x3. That means that the digital elevation model (DEM) has to be resampled to the scale of the landform features that are of interest. Any higher resolution is lost in this resampling. When the topographic features are computed through regression that is performed at the resolution of the original data, the accuracy can be much higher, and the reported result can be adjusted to the length scale that is relevant locally. Slope and variance are calculated for overlapping windows, meaning that one regression result is computed per raster point. The number of window centers per area is the same for the output as for the original DEM. Slope and variance are computed by performing regression on the points in the surrounding window. Such an approach is computationally feasible because of the additive nature of regression parameters and variance. Any doubling of window size in each direction only takes a single pass over the data, corresponding to a logarithmic scaling of the resulting algorithm as a function of the window size. Slope and variance are stored for each aggregation step, allowing the reported slope to be selected to minimize variance. The approach thereby adjusts the effective window size to the landform features that are characteristic to the area within the DEM. Starting with a window size of 2x2, each iteration aggregates 2x2 non-overlapping windows from the previous iteration. Regression results are stored for each iteration, and the slope at minimal variance is reported in the final result. As such, the reported slope is adjusted to the length scale that is characteristic of the landform locally. The length scale itself and the variance at that length scale are also visualized to aid in interpreting the results for slope. 
The relevant length scale is taken to be half of the window size over which the minimum variance was achieved. The resulting process was evaluated on 1-meter DEM data and on artificial data constructed to have defined length scales plus added noise. A comparison with ESRI ArcMap showed the potential of the proposed algorithm: the resolution of the resulting output is much higher, and the slope and aspect are much less affected by noise. Additionally, the algorithm adjusts to the scale of interest within each region of the image. These benefits are gained without additional computational cost compared with resampling the DEM and computing the slope over 3x3 windows in ESRI ArcMap for each resolution. In summary, the proposed approach extracts the slope and aspect of DEMs at the length scales that are characteristic locally. The result is of higher resolution and less affected by noise than existing techniques.
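The additive aggregation that makes this approach feasible can be sketched for the variance statistic alone (the slope regression is omitted here). The toy 4x4 DEM and the assumption of a square, power-of-two grid are illustrative, not the authors' data.

```python
def _agg(a):
    # sum each non-overlapping 2x2 block of the previous level
    m = len(a) // 2
    return [[a[2*i][2*j] + a[2*i][2*j+1] + a[2*i+1][2*j] + a[2*i+1][2*j+1]
             for j in range(m)] for i in range(m)]

def window_variances(dem, levels):
    """Per-window elevation variance at each window-doubling step,
    computed from additive statistics (count, sum, sum of squares),
    so each doubling is a single pass over the previous level."""
    cnt = [[1.0] * len(row) for row in dem]
    s = [[float(v) for v in row] for row in dem]
    sq = [[float(v) * v for v in row] for row in dem]
    out = []
    for _ in range(levels):
        cnt, s, sq = _agg(cnt), _agg(s), _agg(sq)
        var = [[sq[i][j] / cnt[i][j] - (s[i][j] / cnt[i][j]) ** 2
                for j in range(len(s[i]))] for i in range(len(s))]
        out.append(var)
    return out

# toy DEM: four flat 2x2 plateaus, so variance is zero at the 2x2 scale
# but positive at the 4x4 scale
dem = [[1, 1, 2, 2],
       [1, 1, 2, 2],
       [3, 3, 4, 4],
       [3, 3, 4, 4]]
levels = window_variances(dem, 2)
```

Reporting, per cell, the scale at which variance is minimal is then a simple argmin over the stored levels; slope would be carried through the same additive aggregation in the full method.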

Keywords: high resolution digital elevation models, multi-scale analysis, slope calculation, window-based regression

Procedia PDF Downloads 129
2139 Modeling the Downstream Impacts of River Regulation on the Grand Lake Meadows Complex using Delft3D FM Suite

Authors: Jaime Leavitt, Katy Haralampides

Abstract:

Numerical modelling has been used to investigate the long-term impact of a large dam on downstream wetland areas, specifically in terms of changing sediment dynamics in the system. The Mactaquac Generating Station (MQGS) is a 672 MW run-of-the-river hydroelectric facility commissioned in 1968 on the mainstem of the Wolastoq|Saint John River in New Brunswick, Canada. New Brunswick Power owns and operates the dam and has been working closely with the Canadian Rivers Institute at UNB Fredericton on a multi-year, multi-disciplinary project investigating the impact the dam has on its surrounding environment. With a focus on the downstream river, this research discusses the initialization, set-up, calibration, and preliminary results of a 2-D hydrodynamic model built with the Delft3D Flexible Mesh Suite (successor of the Delft3D 4 Suite). The flexible mesh allows the model grid to be structured in the main channel and unstructured in the floodplains and other downstream regions with complex geometry; this combination of grid types improves computational time and output quality. As the movement of water governs the movement of sediment, the calibrated and validated hydrodynamic model was applied to sediment transport simulations, particularly of the fine suspended sediments. Several provincially significant Protected Natural Areas and federally significant National Wildlife Areas are located 60 km downstream of the MQGS. These broad, low-lying floodplains and wetlands are known as the Grand Lake Meadows Complex (GLM Complex). There is added pressure to investigate the impacts of river regulation on these protected regions, which rely heavily on natural river processes like sediment transport and flooding. It is hypothesized that the fine suspended sediment would naturally travel to the floodplains for nutrient deposition and replenishment, particularly during the freshet and large storms. The purpose of this research is to investigate the impacts of river regulation on downstream environments and to use the model as a tool for informed decision-making to protect and maintain biologically productive wetlands and floodplains.

Keywords: hydrodynamic modelling, national wildlife area, protected natural area, sediment transport

Procedia PDF Downloads 6
2138 Anti-Inflammatory Effect of Carvedilol 1% Ointment in Topical Application to the Animal Model

Authors: Berina Pilipović, Saša Pilipović, Maja Pašić-Kulenović

Abstract:

Inflammation is the body's response to impaired homeostasis caused by infection, injury or trauma, resulting in systemic and local effects. It is characterized by a series of events, including the inflammatory response, the response of pain receptors and the recovery process, and can be acute or chronic. The inflammatory response is described in three different phases. A free radical is an atom or molecule with an unpaired electron and is therefore generally a very reactive chemical species. A biologically important example of a reaction involving free radicals is lipid peroxidation (LP). Lipid peroxidation reactions occur in biological membranes, and if they are not stopped at the outset by the action of antioxidants, they damage the membrane, resulting in a partial or complete loss of its physiological functions. Calcium antagonists and beta-adrenergic receptor antagonists are well-known drugs that have been widely used for many years in the treatment of cardiovascular diseases. Some of these compounds also show antioxidant activity. The mechanism of the antioxidant activity of calcium antagonists and beta-blockers is unknown, since their structures vary widely. This research investigated the possible local anti-inflammatory activity of an ointment containing 1% carvedilol in white petrolatum USP. Ear inflammation was induced with 10 µl of a 3% croton oil acetone solution on both ears of each mouse. Albino Swiss mice (n = 8) were treated with 2.5 mg/ear of the ointment, and a control group was treated in the same way with 1% hydrocortisone ointment (2.5 mg/ear). The other ear of the same animal was used as a control. Ointments were administered once per day on the left ear. After treatment, the ears were observed for three days, after which we measured the mass (mg) of 6 mm punches from treated and control ears. The results of testing the anti-inflammatory effect of the carvedilol ointment in the mouse ear model show a stronger effect than that of the 1% hydrocortisone ointment in the same base. Identical results were confirmed by the difference between the masses of the 6 mm ear punches, and the results were also confirmed by histological examination. The carvedilol ointment showed a significant reduction of the inflammation caused by croton oil in the mouse model.

Keywords: antioxidant, carvedilol, inflammation, mouse ear

Procedia PDF Downloads 234
2137 Applying Multiplicative Weight Update to Skin Cancer Classifiers

Authors: Animish Jain

Abstract:

This study uses multiplicative weight update within artificial intelligence and machine learning to create models that can diagnose skin cancer from microscopic images of cancer samples. The multiplicative weight update method combines the predictions of multiple models to try to obtain more accurate results. Logistic Regression, Convolutional Neural Network (CNN) and Support Vector Machine Classifier (SVMC) models are employed within the multiplicative weight update system. These models are trained on pictures of skin cancer from the ISIC Archive to look for patterns and label unseen scans as either benign or malignant. They are then combined in a multiplicative weight update algorithm which takes into account the precision and accuracy of each model, applying a weight to each model's guess after each successive prediction; the guesses and weights are then analyzed together to obtain the final predictions. The research hypothesis stated that there would be a significant difference in accuracy between the three models and the multiplicative weight update system. The SVMC model had an accuracy of 77.88%, the CNN model 85.30%, and the Logistic Regression model 79.09%, while the multiplicative weight update algorithm achieved an accuracy of 72.27%. It was concluded that there was indeed a significant difference in accuracy, and that a CNN model would be a better option for this problem than a multiplicative weight update system. This may be because multiplicative weight update is not effective in a binary setting with only two possible classifications. In a categorical setting with multiple classes and groupings, a multiplicative weight update system might become more proficient, as it takes into account the strengths of multiple different models to classify images into multiple categories rather than only two, as shown in this study. This experimentation can help to create better algorithms and models for the future of artificial intelligence in the medical imaging field.
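The weighting scheme described above can be sketched as a textbook multiplicative-weights ensemble. The (1 - eta) penalty for a wrong expert, the eta value, and the toy prediction streams below are assumptions; the study's exact update rule is not specified in the abstract.

```python
def mwu_ensemble(predictions, labels, eta=0.5):
    """Combine expert (model) predictions with multiplicative weights:
    after each example, an expert that guessed wrong has its weight
    multiplied by (1 - eta); the ensemble predicts by weighted vote
    over binary labels (0 = benign, 1 = malignant)."""
    n_experts = len(predictions)
    w = [1.0] * n_experts
    combined = []
    for t, y in enumerate(labels):
        vote = sum(w[k] * predictions[k][t] for k in range(n_experts))
        combined.append(1 if vote >= sum(w) / 2 else 0)
        for k in range(n_experts):
            if predictions[k][t] != y:
                w[k] *= 1 - eta
    return combined, w

preds = [[1, 1, 1, 1],   # hypothetical expert that is always right
         [0, 0, 0, 0],   # expert that is always wrong
         [1, 0, 1, 0]]   # expert that is right half the time
labels = [1, 1, 1, 1]
combined, weights = mwu_ensemble(preds, labels)
```

After four rounds the always-wrong expert's weight has decayed to 0.5^4, so the ensemble is dominated by the reliable expert, which is the behavior the theory guarantees in the long run.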

Keywords: artificial intelligence, machine learning, multiplicative weight update, skin cancer

Procedia PDF Downloads 79
2136 Cas9-Assisted Direct Cloning and Refactoring of a Silent Biosynthetic Gene Cluster

Authors: Peng Hou

Abstract:

Natural products produced by marine bacteria serve as an immense reservoir of anti-infective drugs and therapeutic agents. Nowadays, heterologous expression of gene clusters of interest has been widely adopted as an effective strategy for natural product discovery. Briefly, the heterologous expression workflow is: biosynthetic gene cluster identification, pathway construction and expression, and product detection. However, gene cluster capture using the traditional transformation-associated recombination (TAR) protocol is inefficient (0.5% positive colony rate). To make things worse, most putative new natural products are only predicted by bioinformatics analyses such as antiSMASH, and their corresponding biosynthetic pathways are either not expressed or expressed at very low levels under laboratory conditions. These setbacks have inspired us to seek new technologies to efficiently edit and refactor biosynthetic gene clusters. Recently, two cutting-edge techniques have attracted our attention: CRISPR-Cas9 and Gibson Assembly. So far, we have pretreated Brevibacillus laterosporus genomic DNA with CRISPR-Cas9 nucleases that specifically generate breaks near the gene cluster of interest. This trial resulted in an increase in the efficiency of gene cluster capture (to 9%). Moreover, using Gibson Assembly to add or delete certain operons and tailoring enzymes regardless of end compatibility, the silent construct (~80 kb) has been successfully refactored into an active one, yielding a series of expected analogs. With these novel molecular tools, we are confident that the development of a high-throughput pipeline for DNA assembly, transformation, and product isolation and identification is no longer a daydream for marine natural product discovery.

Keywords: biosynthesis, CRISPR-Cas9, DNA assembly, refactor, TAR cloning

Procedia PDF Downloads 282
2135 Development of Wave-Dissipating Block Installation Simulation for Inexperienced Worker Training

Authors: Hao Min Chuah, Tatsuya Yamazaki, Ryosui Iwasawa, Tatsumi Suto

Abstract:

In recent years, with the advancement of digital technology, the movement to introduce so-called ICT (Information and Communication Technology), such as computer and network technology, to civil engineering and construction sites has been accelerating. As part of this movement, attempts are being made in various situations to reproduce actual sites inside computers and use them for design and construction planning, as well as for training inexperienced engineers. The installation of wave-dissipating blocks on coasts is a type of work that has been carried out by skilled workers based on years of experience and is one of the tasks that is difficult for inexperienced workers to carry out on site. Wave-dissipating blocks are structures designed to protect coasts and beaches from erosion by reducing the energy of ocean waves. They usually weigh more than 1 t and are installed by being suspended from a crane, so training inexperienced workers on-site would be time-consuming and costly. In this paper, therefore, a block installation simulator is developed based on Unity 3D, a game development engine. The simulator computes porosity, defined here as the ratio of the total volume of wave-dissipating blocks inside the structure to the volume of the ideal final structure; using this evaluation, the simulator can determine how well the user is able to install the blocks. A voxelization technique is used to calculate the porosity of the structure, simplifying the calculations, and other techniques, such as raycasting and box overlapping, are employed for accurate simulation. In the near future, the simulator will incorporate an automatic block installation algorithm based on combinatorial optimization and compare the user's block installation with the appropriate installation found by the algorithm.
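The voxel-based evaluation can be sketched as a set intersection over voxel coordinates. The 4x4x4 envelope and the half-filled placement below are hypothetical; the paper's actual voxel resolution and geometry handling (raycasting, box overlap) are not reproduced here.

```python
def block_ratio(block_voxels, ideal_voxels):
    """Voxel-based measure in the spirit of the paper's porosity
    evaluation: the fraction of the ideal structure's voxels that is
    occupied by placed blocks. Shapes are sets of (x, y, z) integer
    voxel coordinates, so the volume ratio reduces to set counting."""
    return len(block_voxels & ideal_voxels) / len(ideal_voxels)

# hypothetical 4x4x4 ideal envelope, with blocks filling its lower half
ideal = {(x, y, z) for x in range(4) for y in range(4) for z in range(4)}
blocks = {(x, y, z) for (x, y, z) in ideal if z < 2}
```

Voxelizing both the placed blocks and the target envelope is what turns an awkward 3D volume-overlap computation into cheap integer set operations.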

Keywords: 3D simulator, porosity, user interface, voxelization, wave-dissipating blocks

Procedia PDF Downloads 103
2134 Use of 3D Printed Bioscaffolds from Decellularized Umbilical Cord for Cartilage Regeneration

Authors: Tayyaba Bari, Muhammad Hamza Anjum, Samra Kanwal, Fakhera Ikram

Abstract:

Osteoarthritis, a degenerative condition, affects more than 213 million individuals globally. Since articular cartilage has no or limited blood vessels, it is unable to rejuvenate after deteriorating. Traditional approaches to cartilage repair, like autologous chondrocyte implantation, microfracture and cartilage transplantation, are often associated with postoperative complications and lead to further degradation. The decellularized human umbilical cord has gained interest as a viable treatment for cartilage repair. Decellularization removes all cellular contents and debris, leaving a biologically active 3D network known as the extracellular matrix (ECM). This matrix is biodegradable and non-immunogenic and provides a microenvironment for homeostasis, growth and repair. UC-derived bioink functions as a 3D scaffolding material that mediates not only cell-matrix interactions but also the adherence, proliferation and propagation of cells in 3D organoids. This study comprises different physical, chemical and biological approaches to optimize the decellularization of human umbilical cord (UC) tissue, followed by solubilization of the tissue to form a bioink. The decellularization process consisted of two freeze-thaw cycles in which the umbilical cord, stored at -20˚C, was thawed at room temperature and then dissected into small sections of 0.5 to 1 cm. Decellularization with the ionic and non-ionic detergents sodium dodecyl sulfate (SDS) and Triton X-100 revealed that both SDS concentrations, 0.1% and 1%, were effective in completely removing cells from the small UC tissue sections. The decellularization results were further confirmed by running the samples on a 1% agarose gel. Histological analysis of paraffin-embedded 4 μm sections processed for hematoxylin-eosin-saffron and 4,6-diamidino-2-phenylindole (DAPI) staining revealed the efficacy of decellularization. ECM preservation was confirmed by Alcian blue and Masson's trichrome staining on consecutive sections, from which images were obtained. Sulfated GAG content was determined by the 1,9-dimethyl-methylene blue (DMMB) assay, and collagen was quantified by the hydroxyproline assay. This 3D bioengineered scaffold will provide an environment typical of the extracellular matrix of the tissue and will be seeded with mesenchymal cells to generate the desired 3D ink for in vitro and in vivo cartilage regeneration applications.

Keywords: umbilical cord, 3D printing, bioink, tissue engineering, cartilage regeneration

Procedia PDF Downloads 100
2133 Analysis of an IncResU-Net Model for R-Peak Detection in ECG Signals

Authors: Beatriz Lafuente Alcázar, Yash Wani, Amit J. Nimunkar

Abstract:

Cardiovascular diseases (CVDs) are the leading cause of death globally, and around 80% of sudden cardiac deaths are due to arrhythmias or irregular heartbeats. The majority of these pathologies are revealed by either short-term or long-term alterations in electrocardiogram (ECG) morphology. The ECG is the main diagnostic tool in cardiology: a non-invasive, pain-free procedure that measures the heart’s electrical activity and allows the detection of abnormal rhythms and underlying conditions. A cardiologist can diagnose a wide range of pathologies based on alterations in the ECG's form, but human interpretation is subjective and prone to error. Moreover, ECG records can be quite prolonged, which can further complicate visual diagnosis and significantly delay disease detection. In this context, deep learning methods have risen as a promising strategy for extracting relevant features and eliminating individual subjectivity in ECG analysis. They facilitate the computation of large sets of data and can provide early and precise diagnoses; therefore, cardiology is one of the fields that can benefit most from the implementation of deep learning algorithms. In the present study, a deep learning algorithm is trained following a novel approach, using a combination of different databases as the training set. The goal of the algorithm is the detection of R-peaks in ECG signals. Its performance is further evaluated on ECG signals with different origins and features to test the model’s ability to generalize. Performance of the model in detecting R-peaks in clean and noisy ECGs is presented: the model is able to detect R-peaks in the presence of various types of noise and when presented with data it has not been trained on. It is expected that this approach will increase the effectiveness and capacity of cardiologists to detect divergences from the normal cardiac activity of their patients.
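R-peak detectors of this kind are conventionally scored by matching detected peaks to annotated ones within a small sample tolerance, from which sensitivity and positive predictivity follow. A minimal sketch of that evaluation step is below; the tolerance of 5 samples, the greedy matching, and the toy indices are assumptions, not details from the paper.

```python
def match_peaks(detected, annotated, tol=5):
    """Greedily match detected R-peak sample indices to annotated ones
    within a +/- tol sample tolerance; each annotation may be matched
    at most once. Returns (TP, FP, FN) counts for computing sensitivity
    TP/(TP+FN) and positive predictivity TP/(TP+FP)."""
    annotated = sorted(annotated)
    used = [False] * len(annotated)
    tp = 0
    for d in sorted(detected):
        for i, a in enumerate(annotated):
            if not used[i] and abs(d - a) <= tol:
                used[i] = True
                tp += 1
                break
    fp = len(detected) - tp   # detections with no nearby annotation
    fn = len(annotated) - tp  # annotations the detector missed
    return tp, fp, fn
```

For example, `match_peaks([100, 203, 400], [101, 200, 300])` counts two true positives, one false positive (400), and one missed beat (300).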

Keywords: arrhythmia, deep learning, electrocardiogram, machine learning, R-peaks

Procedia PDF Downloads 186
2132 Biomimetic Architecture: The Bio Process to an Eco-Friendly Design

Authors: Odeyemi Ifeoluwayemi, Maha Joushua, Fulani Omoyeni

Abstract:

In the search for sustainability, architectural approaches to design have moved over time from merely nature-inspired design to the study of nature’s principles in order to produce effective designs that address sustainability. Nature has established materials, shapes and processes that are effective from the smallest to the largest scale. Biology, the branch of human knowledge that studies nature, helps us grasp and understand it. Biomimicry is a new way of viewing and valuing nature, based not on what we can extract from the natural world but on what we can learn from it. Life has been sustained on Earth for the last 3.85 billion years, and it is worth finding out how it has remained sustained for so long. Buildings should teach society new ecological values; thus, a better understanding of how nature works can usefully inspire architectural designs to resolve issues that nature has already resolved. This will not only help in creating a healthy environment but will also produce positive environmental impacts. Biomimetic architecture connects and reproduces the principles found in nature in order to create a built environment that benefits people and other living creatures while preserving it for the future. Understanding these bioprocesses leads to ecological approaches that serve as a platform for creating a built environment that goes beyond sustaining current settings to mimic nature’s regenerative ecosystems. This paper aims to explain these design methods, under the names of biomimicry and biomimetic architecture, by reviewing literature and research works that examine approaches classified as forms, processes and ecosystems. It is expected that this research will provide information leading to the creation of buildings that are eco-friendly and provide greater comfort to their occupants.

Keywords: biomimetic architecture, biomimicry, ecological design, nature

Procedia PDF Downloads 256
2131 The Effect of Gender Inequality on Reproductive Health in Africa: The Case of Cultural Ghana

Authors: Edna Roseline Dede Tetteh

Abstract:

Reproductive health research and discussions have, over the years, placed a special focus on Africa. This is partly due to the significant relationship between African cultures and reproductive health. Several studies have also acknowledged the economic impact of reproductive health in Africa, because of which reproductive health, particularly family planning, has featured prominently in many economic discussions about Africa. Gender, which is a major element of most African cultures, inspired this study. Given that gender has a significant cultural influence in Africa, the study examined the effect of gender inequality on reproductive health in Africa, with a special focus on Ghana. Specifically, the study examined whether there exists any relationship between gender inequality and reproductive health and, if there is, what the nature and the effect of the relationship are. The study's findings were based on data gathered from 2304 respondents, randomly selected from Ghana's different tribes and ethnic groups. Given that the study was focused on the influence of gender in sexual relationships, the study’s population was people 16 years and above since 16 is the legal age of sexual consent in Ghana. Data was collected through questionnaires and interviews. It was found that the beliefs and practices of the traditional Ghanaian society, like most African societies, have direct and significant impacts on reproductive health. Males in these cultures have more control over reproductive health decisions and choices than females. The study found that it was culturally condemnable for a wife to refuse her husband’s request for sex, even when she is not in the mood for sex, or she is unwell. It was further found that, when it comes to the decision of birth control, males have more power. 
Consequently, females with reproductive health conditions have no control over choices that would support those conditions; they must always satisfy their husbands' sexual needs. Most of the female respondents indicated they had little or no control over protecting themselves from reproductive health risks unless they had the understanding and support of their sexual partners.

Keywords: culture, gender, Ghana, inequality, reproductive health

Procedia PDF Downloads 29
2130 Secondary Radiation in Laser-Accelerated Proton Beamline (LAP)

Authors: Seyed Ali Mahdipour, Maryam Shafeei Sarvestani

Abstract:

Radiation pressure acceleration (RPA) and target normal sheath acceleration (TNSA) are the most important methods in laser-accelerated proton (LAP) beam planning systems. LAP has inspired novel applications that can benefit from proton bunch properties different from those of conventionally accelerated proton beams. The secondary neutrons and photons produced in the collision of protons with beamline components are among the important concerns in proton therapy. Various published Monte Carlo studies have evaluated beamline and shielding considerations for the TNSA method, but no studies directly address secondary neutron and photon production from the RPA method in LAP. The purpose of this study is to calculate the flux distribution of secondary neutron and photon radiation in the first area of LAP and to determine the optimal thickness and radius of the energy selector in a LAP planning system based on the RPA method. We also present Monte Carlo calculations to determine the appropriate beam pipe for shielding a LAP planning system. The GEANT4 Monte Carlo toolkit has been used to simulate secondary radiation production in LAP. A section of a new multifunctional LAP beamline, based on the pulsed power solenoid scheme, has been modeled in GEANT4. The results show that the energy selector is the most important source of secondary neutrons and photons in the LAP beamline. According to the calculations, a pure tungsten energy selector may not be the proper choice; using tungsten+polyethylene or tungsten+graphite composite selectors will reduce the neutron and photon intensities by approximately 10% and 25%, respectively. The optimal radii of the energy selectors were found to be ~4 cm and ~6 cm for proton deviation angles of 3 and 5 degrees, respectively.

Keywords: neutron, photon, flux distribution, energy selector, GEANT4 toolkit

Procedia PDF Downloads 104
2129 Relationship between Cinema and Culture: Reel and Real Life in India

Authors: Prachi Chavda

Abstract:

The world, as of today, is smaller than it was for those who lived a few decades ago. Internet, media, and telecommunications have impacted the world like never before. Culture is the pillar upon which a society grows. A culture develops through human creativity over the years and also through the exchange and intermixing of ideas and ways of life across different civilizations, and one influential medium of this exchange is cinema. Cinema has been a wonderful as well as important medium of communication ever since it emerged. Change is the thumb rule of life, and so it has been for Indian cinema. As society has evolved over time, so have the stories of Indian cinema and its characters; this, in turn, directly affects Indian culture, since cinema has been a very strong mediator of information exchange. The paper discusses in depth how Indian cinema (reel life) and Indian culture (real life) influence each other, resulting in constant modification of both. Moreover, the research illustrates with examples how movies impact Indian culture both positively and negatively, thereby spreading waves of change in the cultural settings of society. The paper also sheds light on the psychology of Indian youth. Today, children and youth greatly admire the ostentatious materialistic display of outfits and the style of actors in the movies. Movies bearing romanticism and showcasing contentious issues such as pre-marital sex, live-in relationships, and homosexuality, even without highlighting them extensively, have indeed influenced the common people. Pros and cons always exist. 
Such revelation of issues certainly sparks the minds of those in their formative years, and its effect is seen with the passage of time. Thus, cinema has emerged as a strong tool of social change, and culture as a triggering factor for transformation in cinema. We find that culture and cinema in India influence each other: cinema and culture are two sides of a coin, each responsible for the evolution of the other.

Keywords: cinema, culture, influence, transformation

Procedia PDF Downloads 397
2128 Numerical Simulation of Filtration Gas Combustion: Front Propagation Velocity

Authors: Yuri Laevsky, Tatyana Nosova

Abstract:

The phenomenon of filtration gas combustion (FGC) was discovered experimentally in the early 1980s. It has a number of important applications in areas such as chemical technologies, fire and explosion safety, energy-saving technologies, and oil production. From the physical point of view, FGC may be defined as the propagation of a region of gaseous exothermic reaction in a chemically inert porous medium, as the gaseous reactants seep into the region of chemical transformation. The movement of the combustion front has different modes, and this investigation is focused on the low-velocity regime. The main characteristic of the process is the velocity of combustion front propagation. Computation of this characteristic encounters substantial difficulties because of the strong heterogeneity of the process. The mathematical model of FGC is formed by the energy conservation laws for the temperatures of the porous medium and of the gas, and by the mass conservation law for the relative concentration of the reacting component of the gas mixture. The homogenization of the model is performed with the two-temperature approach, in which at each point of the continuous medium we specify the solid and gas phases with Newtonian heat exchange between them. The construction of a computational scheme is based on the principles of the mixed finite element method on a regular mesh. The approximation in time is performed by an explicit-implicit difference scheme. Special attention was given to the determination of the combustion front propagation velocity. Straightforward computation of the velocity as a grid derivative leads to an extremely unstable algorithm. It is worth noting that the term 'front propagation velocity' makes sense for settled motion, for which analytical formulae linking velocity and equilibrium temperature hold. 
A numerical implementation of one such formula, leading to stable computation of the instantaneous front velocity, has been proposed. The resulting algorithm has been applied in a subsequent numerical investigation of the FGC process. In this way, the dependence of the main characteristics of the process on various physical parameters has been studied. In particular, the influence of the combustible gas mixture consumption on the front propagation velocity has been investigated. It has also been reaffirmed numerically that there is an interval of critical values of the interfacial heat transfer coefficient at which a breakdown occurs from slow combustion front propagation to rapid propagation. Approximate boundaries of this interval have been calculated for some specific parameters. All the results obtained are in full agreement with both experimental and theoretical data, confirming the adequacy of the model and the algorithm constructed. The availability of stable techniques to calculate the instantaneous velocity of the combustion wave allows considering a semi-Lagrangian approach to the solution of the problem.
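The stable level-crossing idea can be illustrated with a small Python sketch on a toy traveling front (a hypothetical tanh profile with known speed, not the FGC model itself): the front position is located by linear interpolation of an isotherm crossing, which is far better conditioned than differentiating grid values directly.

```python
import numpy as np

def front_position(x, T, level):
    """Locate the front by linear interpolation of the level crossing.

    Interpolating the crossing point is much more stable than taking
    a grid derivative of the discrete profile.
    """
    idx = np.where((T[:-1] - level) * (T[1:] - level) <= 0)[0][0]
    x0, x1, t0, t1 = x[idx], x[idx + 1], T[idx], T[idx + 1]
    return x0 + (level - t0) * (x1 - x0) / (t1 - t0)

# Toy traveling front: T(x, t) = 0.5 * (1 - tanh((x - c t) / w))
c, w = 0.3, 0.05                      # true front speed and width (arbitrary units)
x = np.linspace(0.0, 10.0, 2001)
dt = 0.1
times = np.arange(0.0, 5.0, dt)
positions = [front_position(x, 0.5 * (1 - np.tanh((x - c * t) / w)), 0.5)
             for t in times]

# Instantaneous velocity from successive interpolated positions
v = np.diff(positions) / dt
print(f"estimated front velocity: {v.mean():.4f} (true: {c})")
```

The estimate recovers the prescribed speed to high accuracy even though the profile is steep on the grid scale.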

Keywords: filtration gas combustion, low-velocity regime, mixed finite element method, numerical simulation

Procedia PDF Downloads 302
2127 The Location-Routing Problem with Pickup Facilities and Heterogeneous Demand: Formulation and Heuristics Approach

Authors: Mao Zhaofang, Xu Yida, Fang Kan, Fu Enyuan, Zhao Zhao

Abstract:

Nowadays, last-mile distribution plays an increasingly important role in the whole industrial delivery chain and accounts for a large proportion of the total distribution cost. Promoting the upgrading of logistics networks and improving the layout of final distribution points have become trends in the development of modern logistics. Because customer demand is discrete and heterogeneous in both nature and spatial distribution, leading to higher delivery failure rates and lower vehicle utilization, last-mile delivery has become a time-consuming and uncertain process. As a result, courier companies have introduced a range of innovative parcel storage facilities, including pick-up points and lockers. The introduction of pick-up points and lockers has not only improved the user experience but has also helped logistics and courier companies achieve economies of scale. Against the backdrop of the recent COVID-19 pandemic, contactless delivery has become a new hotspot, which has also created new opportunities for the development of collection services. Therefore, a key issue for logistics companies is how to design or redesign their last-mile distribution networks to create integrated logistics and distribution networks that consider pick-up points and lockers. This paper focuses on the introduction of self-pickup facilities in new logistics and distribution scenarios with heterogeneous customer demands. We consider two types of demand, ordinary products and refrigerated products, with corresponding transportation vehicles. We account for the constraints associated with self-pickup points and lockers and then address the location-routing problem with self-pickup facilities and heterogeneous demands (LRP-PFHD). 
To solve this challenging problem, we propose a mixed integer linear programming (MILP) model that minimizes the total cost, which includes the facility opening cost, the variable transport cost, and the fixed transport cost. Due to the NP-hardness of the problem, we propose a hybrid adaptive large neighbourhood search algorithm to solve LRP-PFHD. We evaluate the effectiveness and efficiency of the proposed algorithm using instances generated from benchmark instances. The results demonstrate that the hybrid adaptive large neighbourhood search algorithm is more efficient than MILP solvers such as Gurobi for LRP-PFHD, especially for large-scale instances. In addition, we present a comprehensive analysis of some important parameters (e.g., facility opening cost and transportation cost) to explore their impacts on the results and suggest helpful managerial insights for courier companies.
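A generic adaptive large neighbourhood search skeleton can be sketched as follows, here applied to a toy routing instance; this is not the authors' hybrid algorithm or the LRP-PFHD model, only an illustration of the destroy/repair loop with adaptive operator weights.

```python
import math
import random

def tour_len(tour, pts):
    """Total length of a closed tour over 2D points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def random_destroy(tour, k):
    """Destroy operator: remove k random customers from the tour."""
    removed = random.sample(tour, k)
    return [c for c in tour if c not in removed], removed

def greedy_repair(partial, removed, pts):
    """Repair operator: re-insert each removed customer at its cheapest position."""
    tour = partial[:]
    for c in removed:
        pos = min(range(len(tour) + 1),
                  key=lambda i: tour_len(tour[:i] + [c] + tour[i:], pts))
        tour.insert(pos, c)
    return tour

def alns(pts, iters=300, seed=0):
    random.seed(seed)
    cur = list(range(len(pts)))
    best = cur[:]
    destroys = [lambda t: random_destroy(t, 2), lambda t: random_destroy(t, 4)]
    weights = [1.0, 1.0]                        # adaptive operator weights
    for _ in range(iters):
        i = random.choices(range(len(destroys)), weights)[0]
        partial, removed = destroys[i](cur)
        cand = greedy_repair(partial, removed, pts)
        if tour_len(cand, pts) < tour_len(cur, pts):
            cur = cand
            weights[i] += 0.5                   # reward an improving operator
        weights[i] = max(0.1, weights[i] * 0.95)
        if tour_len(cur, pts) < tour_len(best, pts):
            best = cur[:]
    return best

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(15)]
best = alns(pts)
print(round(tour_len(best, pts), 3))
```

A real implementation would add an acceptance criterion (e.g., simulated annealing) and problem-specific operators for facility opening and vehicle-type constraints.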

Keywords: city logistics, last-mile delivery, location-routing, adaptive large neighborhood search

Procedia PDF Downloads 78
2126 A Comparative Analysis of Classification Models with Wrapper-Based Feature Selection for Predicting Student Academic Performance

Authors: Abdullah Al Farwan, Ya Zhang

Abstract:

In today’s educational arena, it is critical to understand educational data and be able to evaluate important aspects, particularly data on student achievement. Educational Data Mining (EDM) is a research area focused on uncovering patterns and information in data from educational institutions. Teachers who can predict their students' class performance can use this information to improve their teaching. EDM has evolved into valuable knowledge that can be used for a wide range of objectives; for example, it can inform strategic plans for delivering high-quality education. Building on previous data, this paper recommends employing data mining techniques to forecast students' final grades. In this study, five data mining methods, Decision Tree, JRip, Naive Bayes, Multi-layer Perceptron, and Random Forest, with wrapper-based feature selection, were applied to two datasets relating to Portuguese language and mathematics classes. The results showed the effectiveness of data mining methodologies in predicting student academic success. The classification accuracies achieved with the selected algorithms ranged from 70.45%, the lowest, obtained by the Multi-layer Perceptron, to 94.10%, the highest, obtained by Random Forest. This proposed work can assist educational administrators in identifying poorly performing students at an early stage and perhaps in implementing motivational interventions to improve their academic success and prevent dropout.
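A rough sketch of the wrapper approach, assuming scikit-learn is available; it uses forward sequential feature selection scored by cross-validated accuracy on synthetic data, not the paper's Portuguese/mathematics datasets.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the student-performance data (hypothetical features)
X, y = make_classification(n_samples=300, n_features=10, n_informative=4,
                           random_state=0)

clf = RandomForestClassifier(n_estimators=50, random_state=0)

# Wrapper selection: forward search, each candidate subset scored by
# cross-validated accuracy of the classifier itself
sfs = SequentialFeatureSelector(clf, n_features_to_select=4,
                                direction="forward", cv=5)
sfs.fit(X, y)
selected = sfs.get_support(indices=True)

acc = cross_val_score(clf, X[:, selected], y, cv=5).mean()
print("selected features:", selected, "cv accuracy: %.3f" % acc)
```

The same estimator object can be swapped for any of the five classifiers named above to reproduce the comparison protocol.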

Keywords: classification algorithms, decision tree, feature selection, multi-layer perceptron, Naïve Bayes, random forest, students’ academic performance

Procedia PDF Downloads 166
2125 Landing Performance Improvement Using Genetic Algorithm for Electric Vertical Take Off and Landing Aircrafts

Authors: Willian C. De Brito, Hernan D. C. Munoz, Erlan V. C. Carvalho, Helder L. C. De Oliveira

Abstract:

In order to reduce commute times for short trips and relieve traffic in large cities, a new transport category has been the subject of research and new designs worldwide. The air taxi market promises to change the way people live and commute through vehicles able to take off and land vertically, providing passenger transport equivalent to a car, with mobility within and between large cities. Today’s civil air transport remains costly and accounts for 2% of man-made CO₂ emissions. Taking advantage of this scenario, many companies have developed their own vertical take-off and landing (VTOL) designs, seeking to meet comfort, safety, low cost, and flight time requirements in a sustainable way. Thus, green power supplies, especially batteries, and fully electric power plants are the most common choice for these emerging aircraft. However, finding a feasible way to use batteries rather than conventional petroleum-based fuels remains a challenge. Batteries are heavy and have an energy density still well below that of gasoline, diesel, or kerosene. Therefore, despite all the clear advantages, all-electric aircraft (AEA) still have low flight autonomy and high operational cost, since the batteries must be recharged or replaced. In this sense, this paper addresses a way to optimize the energy consumption in a typical mission of an air taxi aircraft. The approach and landing procedure was chosen as the subject of optimization by a genetic algorithm, while the final program can be adapted for take-off and flight level changes as well. Data from a real tilt-rotor aircraft with a fully electric power plant were used to fit the derived dynamic equations of motion. Although a tilt-rotor design is used as a proof of concept, the optimization can be adapted to other design concepts, even those with independent motors for the hover and cruise flight phases. 
For a given trajectory, the best set of control variables is calculated to provide the time-history response of the aircraft's attitude, rotor RPM, and thrust direction (or vertical and horizontal thrust, for independent-motor designs) that, if followed, results in the minimum electric power consumption along that landing path. Safety, comfort, and design constraints are imposed to give representativeness to the solution, and the results are highly dependent on these constraints. For the tested cases, performance improvements ranged from 5 to 10% when varying initial airspeed, altitude, flight path angle, and attitude.
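The genetic-algorithm loop over control variables can be sketched minimally in Python; the quadratic cost below is a hypothetical stand-in for the electric power consumption that the paper computes from the flight dynamics model.

```python
import random

def energy(x):
    """Hypothetical stand-in cost: penalizes deviation from a nominal
    control setting (the real objective would come from the dynamics)."""
    target = [0.2, -0.1, 0.5]
    return sum((a - b) ** 2 for a, b in zip(x, target))

def ga(fitness, dim=3, pop_size=30, gens=100, seed=0):
    """Minimal elitist GA: selection, one-point crossover, Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]                 # selection (keep best half)
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = rng.sample(elite, 2)
            cut = rng.randrange(1, dim)              # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:                   # mutation
                i = rng.randrange(dim)
                child[i] += rng.gauss(0, 0.1)
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = ga(energy)
print([round(v, 2) for v in best], round(energy(best), 4))
```

In the paper's setting each individual would encode the control time history over the descent, with constraint violations handled through penalties in the fitness.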

Keywords: air taxi travel, all electric aircraft, batteries, energy consumption, genetic algorithm, landing performance, optimization, performance improvement, tilt rotor, VTOL design

Procedia PDF Downloads 115
2124 Aerodynamic Optimum Nose Shape Change of High-Speed Train by Design Variable Variation

Authors: Minho Kwak, Suhwan Yun, Choonsoo Park

Abstract:

Nose shape optimizations of a high-speed train are performed to improve its aerodynamic characteristics. Based on the commercial train KTX-Sancheon, multi-objective optimizations for the improvement of side wind stability and the micro-pressure wave are conducted following an optimization for the reduction of aerodynamic drag. 3D nose shapes are modeled by the Vehicle Modeling Function. Aerodynamic drag and side wind stability are calculated by a three-dimensional compressible Navier-Stokes solver, and the micro-pressure wave by an axisymmetric compressible Navier-Stokes solver. The maximin Latin hypercube sampling method is used to extract sampling points for constructing the approximation model. A kriging model is constructed as the approximation model, and the NSGA-II algorithm is used as the multi-objective optimization algorithm. Nose length, nose tip height, and lower surface curvature are the design variables. Because nose length is a dominant variable for the aerodynamic characteristics of a train nose, two optimization processes are carried out, with and without nose length as a design variable. A Pareto set was obtained in each case, and an optimized nose shape was selected from each considering the Honam high-speed rail line infrastructure in South Korea. Through the optimization process including nose length, compared to the KTX-Sancheon, aerodynamic drag was reduced by 9.0%, side wind stability was improved by 4.5%, and the micro-pressure wave was reduced by 5.4%; without nose length, aerodynamic drag was reduced by 7.3%, side wind stability improved by 3.9%, and the micro-pressure wave reduced by 3.9%. Comparison of the two optimized shapes shows that, apart from the effect of nose length, similar shapes are extracted.
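A crude maximin Latin hypercube sampler for the three design variables can be sketched in Python; this random-search variant (keep the design with the largest minimum pairwise distance) is a simple stand-in for the paper's sampling tool, not its actual implementation.

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """One random Latin hypercube: each variable's [0, 1) range is split
    into n strata, and each stratum is sampled exactly once."""
    cols = [(rng.permutation(n) + rng.random(n)) / n for _ in range(d)]
    return np.column_stack(cols)

def maximin_lhs(n, d, tries=200, seed=0):
    """Crude maximin LHS: among random designs, keep the one whose
    minimum pairwise point distance is largest (better space filling)."""
    rng = np.random.default_rng(seed)
    best, best_dist = None, -1.0
    for _ in range(tries):
        x = latin_hypercube(n, d, rng)
        diff = x[:, None, :] - x[None, :, :]
        dist = np.sqrt((diff ** 2).sum(-1))
        np.fill_diagonal(dist, np.inf)        # ignore self-distances
        if dist.min() > best_dist:
            best, best_dist = x, dist.min()
    return best

# 20 samples of 3 normalized design variables
# (nose length, nose tip height, lower surface curvature)
plan = maximin_lhs(20, 3)
print(plan.shape)
```

Each row of `plan` would then be mapped to physical design-variable ranges, evaluated with the CFD solvers, and used to train the kriging surrogate.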

Keywords: aerodynamic characteristics, design variable, multi-objective optimization, train nose shape

Procedia PDF Downloads 347
2123 Machine Learning-Assisted Selective Emitter Design for Solar Thermophotovoltaic Systems

Authors: Ambali Alade Odebowale, Andargachew Mekonnen Berhe, Haroldo T. Hattori, Andrey E. Miroshnichenko

Abstract:

Solar thermophotovoltaic systems (STPV) have emerged as a promising solution to overcome the Shockley-Queisser limit, a significant impediment in the direct conversion of solar radiation into electricity using conventional solar cells. The STPV system comprises essential components such as an optical concentrator, selective emitter, and a thermophotovoltaic (TPV) cell. The pivotal element in achieving high efficiency in an STPV system lies in the design of a spectrally selective emitter or absorber. Traditional methods for designing and optimizing selective emitters are often time-consuming and may not yield highly selective emitters, posing a challenge to the overall system performance. In recent years, the application of machine learning techniques in various scientific disciplines has demonstrated significant advantages. This paper proposes a novel nanostructure composed of four-layered materials (SiC/W/SiO2/W) to function as a selective emitter in the energy conversion process of an STPV system. Unlike conventional approaches widely adopted by researchers, this study employs a machine learning-based approach for the design and optimization of the selective emitter. Specifically, a random forest algorithm (RFA) is employed for the design of the selective emitter, while the optimization process is executed using genetic algorithms. This innovative methodology holds promise in addressing the challenges posed by traditional methods, offering a more efficient and streamlined approach to selective emitter design. The utilization of a machine learning approach brings several advantages to the design and optimization of a selective emitter within the STPV system. Machine learning algorithms, such as the random forest algorithm, have the capability to analyze complex datasets and identify intricate patterns that may not be apparent through traditional methods. 
This allows for a more comprehensive exploration of the design space, potentially leading to highly efficient emitter configurations. Moreover, the application of genetic algorithms in the optimization process enhances the adaptability and efficiency of the overall system. Genetic algorithms mimic the principles of natural selection, enabling the exploration of a diverse range of emitter configurations and facilitating the identification of optimal solutions. This not only accelerates the design and optimization process but also increases the likelihood of discovering configurations that exhibit superior performance compared to traditional methods. In conclusion, the integration of machine learning techniques in the design and optimization of a selective emitter for solar thermophotovoltaic systems represents a groundbreaking approach. This innovative methodology not only addresses the limitations of traditional methods but also holds the potential to significantly improve the overall performance of STPV systems, paving the way for enhanced solar energy conversion efficiency.
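A minimal sketch of the surrogate idea, assuming scikit-learn is available: a random forest regressor learns the mapping from layer thicknesses to a figure of merit, which a genetic algorithm can then query cheaply. The synthetic merit formula below is purely illustrative, standing in for an electromagnetic solver.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: thicknesses (nm) of the four SiC/W/SiO2/W
# layers vs. a synthetic spectral-selectivity figure of merit. A real
# study would compute the target with an EM solver, not this formula.
thicknesses = rng.uniform(10, 500, size=(400, 4))
merit = (np.sin(thicknesses[:, 0] / 100)
         + 0.01 * thicknesses[:, 2]
         - 0.005 * thicknesses[:, 1])

# Fit the surrogate on 300 samples, hold out 100 for validation
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(thicknesses[:300], merit[:300])

# The fitted surrogate replaces the expensive solver inside the GA loop
pred = model.predict(thicknesses[300:])
r2 = model.score(thicknesses[300:], merit[300:])
print("hold-out R^2: %.3f" % r2)
```

The hold-out score indicates whether the surrogate is trustworthy enough for the optimizer to search against.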

Keywords: emitter, genetic algorithm, radiation, random forest, thermophotovoltaic

Procedia PDF Downloads 61
2122 Bioactivities and Phytochemical Studies of Acrocarpus fraxinifolius Bark Wight and Arn

Authors: H. M. El-Rafie, A. H. Abou Zeid, R. S. Mohammed, A. A. Sleem

Abstract:

Acrocarpus is a genus of flowering plants in the legume family Fabaceae, a large and economically important family. This study aimed to investigate the phytoconstituents of the petroleum ether extract (PEE) of Acrocarpus fraxinifolius bark by gas chromatography coupled with mass spectrometry (GC/MS) analysis of its fractions (fatty acids and unsaponifiable matter). In the unsaponifiable matter fraction, 52 compounds were identified, constituting 97.03% of its total composition. Cycloeucalenol was the major compound, representing 32.52%, followed by 4α,14α-dimethyl-Δ8,24(28)-ergostadien (26.50%) and β-sitosterol (13.74%). Furthermore, gas liquid chromatography (GLC) analysis of the sterol fraction revealed cholesterol (7.22%), campesterol (13.30%), stigmasterol (10.00%), and β-sitosterol (69.48%). In the fatty acid fraction, 33 fatty acids were identified, representing 90.71% of the total fatty acid constituents; methyl 9,12-octadecadienoate (40.39%), followed by methyl hexadecanoate (23.64%), were the major compounds. On the other hand, column chromatography and thin layer chromatography (TLC) fractionation of the PEE separated the triterpenoids 21β-hydroxylup-20(29)-en-3-one and β-amyrin, which were structurally identified by spectroscopic analysis (NMR, MS, and IR). The PEE was biologically evaluated for (1) management of diabetes in alloxan-induced diabetic rats; (2) cytotoxic activity against four human tumor cell lines (cervix carcinoma [HELA], breast carcinoma [MCF7], liver carcinoma [HEPG2], and colon carcinoma [HCT-116]); and (3) hepatoprotective activity against CCl₄-induced hepatotoxicity in rats, studied by assaying serum marker enzymes such as AST, ALT, and ALP. The anti-diabetic activity exhibited by 100 mg of PEE was 74.38% relative to metformin (100% potency). 
The extract also showed significant anti-proliferative activity against MCF-7 (IC₅₀ = 2.35 µg), HELA (IC₅₀ = 3.85 µg), and HEPG-2 (IC₅₀ = 9.54 µg) compared with doxorubicin as the reference drug. The hepatoprotective activity was evidenced by a significant decrease in the liver enzymes AST, ALT, and ALP by 29.18%, 28.26%, and 34.11%, respectively, using silymarin as the reference drug, compared to their levels in an untreated group with liver damage induced by CCl₄. This study was performed for the first time on the bark of this species.

Keywords: Acrocarpus fraxinofolius, antidiabetic, cytotoxic, hepatoprotective

Procedia PDF Downloads 196
2121 Target-Triggered DNA Motors and their Applications to Biosensing

Authors: Hongquan Zhang

Abstract:

Inspired by endogenous protein motors, researchers have constructed various synthetic DNA motors based on the specificity and predictability of Watson-Crick base pairing. However, the application of DNA motors to signal amplification and biosensing has been limited by low mobility and the difficulty of monitoring the walking process in real time. The objective of our work was to construct a new type of DNA motor, termed target-triggered DNA motors, that can walk for hundreds of steps in response to a single target binding event. To improve the mobility and processivity of DNA motors, we used gold nanoparticles (AuNPs) as scaffolds to build high-density, three-dimensional tracks. Hundreds of track strands are conjugated to a single AuNP. To enable DNA motors to respond to specific protein and nucleic acid targets, we adapted binding-induced DNA assembly into the design of the target-triggered DNA motors. In response to the binding of specific target molecules, DNA motors are activated to walk autonomously along the AuNP, powered by a nicking endonuclease or DNAzyme-catalyzed cleavage of track strands. Each moving step restores the fluorescence of a dye molecule, enabling real-time monitoring of the operation of the DNA motors. The motors can translate a single binding event into the generation of hundreds of oligonucleotides from a single nanoparticle. The motors have been applied to amplify the detection of proteins and nucleic acids in test tubes and live cells. The motors were able to detect low pM concentrations of specific protein and nucleic acid targets in homogeneous solutions without the need for separation. Target-triggered DNA motors are significant for broadening applications of DNA motors to molecular sensing, cell imaging, molecular interaction monitoring, and controlled delivery and release of therapeutics.

Keywords: biosensing, DNA motors, gold nanoparticles, signal amplification

Procedia PDF Downloads 84
2120 Effect of Baffles on the Cooling of Electronic Components

Authors: O. Bendermel, C. Seladji, M. Khaouani

Abstract:

In this work, we present a numerical study of the thermal and dynamic behaviour of air in a horizontal channel containing electronic components. The influence of baffles on the velocity and temperature profiles is discussed. The finite volume method and the SIMPLE algorithm are used to solve the equations of conservation of mass, momentum, and energy. The results show that baffles improve heat transfer between the cooling air and the electronic components, with the velocity increasing up to three times the initial velocity.

Keywords: electronic components, baffles, cooling, fluids engineering

Procedia PDF Downloads 297
2119 Ecological Ice Hockey Butterfly Motion Assessment Using Inertial Measurement Unit Capture System

Authors: Y. Zhang, J. Perez, S. Marnier

Abstract:

To the authors' best knowledge, no study of the goaltending butterfly motion has yet been conducted in real conditions, during an ice hockey game or training practice. This motion, performed to make saves, is unnatural, intense, and repeated. The target of this research activity is to identify representative biomechanical criteria for this goaltender-specific movement pattern. Determining specific physical parameters may make it possible to identify the risk of hip and groin injuries sustained by goaltenders. Four professional or academic goalies were instrumented during ice hockey training practices with five inertial measurement units. These devices were inserted into dedicated pockets located on each thigh and shank, with the fifth on the lumbar spine. A camera was also installed close to the ice to observe and record the goaltenders' activities, especially the butterfly motions, in order to synchronize the captured data with the behavior of the goaltender. Each recording began with a calibration of the inertial units and of the fully equipped goaltender on the ice. Three butterfly motions were recorded outside the training practice to define reference individual butterfly motions. Then, a data processing algorithm based on the Madgwick filter computed hip and knee joint ranges of motion as well as segment angular velocities. The developed software automatically identified and analyzed all the butterfly motions executed by the four goaltenders. To date, it is still too early to show that the analyzed criteria are representative of the trauma generated by the butterfly motion, as the research is only at its beginning. However, this descriptive research activity is promising in its ecological assessment, and once the criteria are found, the tools and protocols defined will allow the prevention of as many injuries as possible. It will thus be possible to build a specific training program for each goalie.
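The joint-angle step can be sketched with plain quaternion algebra: given the orientations of two adjacent segments (e.g. thigh and shank, as estimated by the Madgwick filter, which is not reproduced here), the joint angle is the rotation angle of the relative quaternion.

```python
import math

def quat_conj(q):
    """Conjugate of a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def quat_mul(a, b):
    """Hamilton product of two quaternions, w-first convention."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def joint_angle(q_thigh, q_shank):
    """Total rotation angle between two segment orientations (unit
    quaternions), e.g. from thigh- and shank-mounted IMUs."""
    w = quat_mul(quat_conj(q_thigh), q_shank)[0]
    return 2.0 * math.acos(min(1.0, abs(w)))

# Sanity check: a 90-degree flexion about the x-axis
q_id = (1.0, 0.0, 0.0, 0.0)
q_90 = (math.cos(math.pi / 4), math.sin(math.pi / 4), 0.0, 0.0)
print(math.degrees(joint_angle(q_id, q_90)))   # ≈ 90.0
```

Evaluated at each sample of the recording, this angle gives the joint range-of-motion time series, and its numerical derivative the angular velocity.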

Keywords: biomechanics, butterfly motion, human motion analysis, ice hockey, inertial measurement unit

Procedia PDF Downloads 125
2118 Optimal Design of Wind Turbine Blades Equipped with Flaps

Authors: I. Kade Wiratama

Abstract:

As a result of the significant growth of wind turbines in size, blade load control has become the main challenge for large wind turbines. Many advanced techniques have been investigated with the aim of developing control devices to ease blade loading. Among them, trailing edge flaps have proven to be effective devices for load alleviation. The present study investigates the potential benefits of flaps in enhancing energy capture rather than alleviating blade loads. A software tool is developed specifically for the aerodynamic simulation of wind turbines with blades equipped with flaps. As part of the aerodynamic simulation of these wind turbines, the control system must also be simulated. The simulation of the control system is carried out by solving an optimisation problem that gives the best value of the controlling parameter at each wind turbine operating condition. By developing a genetic algorithm optimisation tool designed specifically for wind turbine blades and integrating it with the aerodynamic performance evaluator, a design optimisation tool for blades equipped with flaps is constructed. The design optimisation tool is employed to carry out design case studies. The results of design case studies on the wind turbine AWT-27 reveal that, as expected, the location of the flap is a key parameter influencing the improvement in power extraction. The best location for a flap is at about 70% of the blade span from the root. The size of the flap also has a significant effect on the enhancement of average power; this effect, however, diminishes rapidly as the size increases. For constant-speed rotors, adding flaps without redesigning the topology of the blade can improve the power extraction capability by as much as about 5%; with redesign of the blade pre-twist, the overall improvement can reach as high as 12%.

Keywords: flaps, design blade, optimisation, simulation, genetic algorithm, WTAero

Procedia PDF Downloads 337
2117 Multi-Sensor Image Fusion for Visible and Infrared Thermal Images

Authors: Amit Kumar Happy

Abstract:

This paper is motivated by the importance of multi-sensor image fusion, with a specific focus on infrared (IR) and visible image (VI) fusion for applications including military reconnaissance. Image fusion can be defined as the process of combining two or more source images into a single composite image with extended information content that improves visual perception or feature extraction. The source images can come from different modalities, such as a visible camera and an IR thermal imager: while visible images are captured from reflected radiation in the visible spectrum, thermal images are formed from thermal (infrared) radiation that may be reflected or self-emitted. A digital color camera captures the visible source image, and a thermal infrared camera acquires the thermal source image. In this paper, image fusion algorithms based on the multi-scale transform (MST) and a region-based selection rule with consistency verification are proposed and presented. The research includes an implementation of the proposed image fusion algorithm in MATLAB, along with a comparative analysis to decide the optimal number of MST decomposition levels and the coefficient fusion rule. The results are presented, and several commonly used evaluation metrics are applied to assess the validity of the proposed method. Experiments show that the proposed approach is capable of producing good fusion results. In deploying image fusion, popular methods present a trade-off: their high computational cost and complex processing steps yield accurate fused results, but they are also hard to deploy in systems and applications that require real-time operation, high flexibility, and low computational capacity. The methods presented in this paper offer good results with minimal time complexity.
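The general MST fusion scheme the abstract describes can be sketched compactly: decompose each source image into a multi-scale pyramid, fuse detail coefficients with a selection rule, fuse the base layer by averaging, and reconstruct. The sketch below (a simple Laplacian-style pyramid with a per-coefficient max-absolute rule, in Python rather than the paper's MATLAB) is a minimal stand-in for the paper's region-based rule with consistency verification, which the abstract does not specify in detail. It assumes image dimensions divisible by 2 at every level.

```python
import numpy as np

def down(img):
    # 2x2 block average: crude low-pass filter plus decimation
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(img):
    # nearest-neighbour upsampling back to double the size
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels):
    pyr = []
    for _ in range(levels):
        low = down(img)
        pyr.append(img - up(low))   # detail (high-frequency) layer
        img = low
    pyr.append(img)                 # residual base layer
    return pyr

def fuse(vi, ir, levels=2):
    p_vi = laplacian_pyramid(vi, levels)
    p_ir = laplacian_pyramid(ir, levels)
    fused = []
    for a, b in zip(p_vi[:-1], p_ir[:-1]):
        # max-absolute selection rule: keep the stronger coefficient
        fused.append(np.where(np.abs(a) >= np.abs(b), a, b))
    fused.append((p_vi[-1] + p_ir[-1]) / 2.0)   # average the base layers
    out = fused[-1]
    for detail in reversed(fused[:-1]):         # reconstruct bottom-up
        out = up(out) + detail
    return out
```

Because the pyramid is exactly invertible, fusing an image with itself reproduces the original, which is a handy sanity check before comparing fusion rules or level counts.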

Keywords: image fusion, IR thermal imager, multi-sensor, multi-scale transform

Procedia PDF Downloads 115
2116 Tree Dress and the Internet of Living Things

Authors: Vibeke Sorensen, Nagaraju Thummanapalli, J. Stephen Lansing

Abstract:

Inspired by the indigenous people of Borneo, Indonesia, and their traditional bark cloth, artist and professor Vibeke Sorensen executed a “digital unwrapping” of several trees in Southeast Asia using a digital panorama camera, digitally “stitching” the images together for printing onto sustainable silk and fashioning into the “Tree Dress”. This dress is a symbolic “un-wrapping” and “re-wrapping” of the tree’s bark onto a person as a second skin. The “digital bark” is directly responsive to the real tree through embedded and networked electronics that connect in real time to sensors at the physical site of the living tree. LEDs and circuits inserted into the dress display continuous measurements of the O2/CO2, temperature, humidity, and light conditions at the tree. It is an “Internet of Living Things” (IOLT) textile that can be worn to track and interact with the tree. The computer system connecting the dress and the tree converts the gas emission data at the site of the real tree into sound and music as sonification, communicating the scientific data while also translating it into a poetic representation. The wearer of the garment can symbolically identify with the tree, or “become one” with it, by adorning its “skin”. In this way, the wearer also becomes a human agent for the tree, bringing its actual condition to the direct perception of the wearer and of others who may engage with it. This project is an attempt to bring greater awareness to issues of deforestation by providing direct access to living things separated by physical distance and, hopefully, to increase empathy for them by providing a way to sense individual trees and their daily existential condition through remote monitoring of data. Further extensions to this project and related issues of sustainability include the use of recycled and alternative plant materials, such as bamboo and air plants, among others.
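A sonification step like the one the abstract mentions, turning gas readings into pitches, can be illustrated with a small mapping. Everything below is hypothetical: the ppm range, the pentatonic scale choice, and the function names are illustrative assumptions, since the abstract does not specify the actual mapping used by the dress.

```python
# Hypothetical sonification mapping: scale each CO2 reading (ppm) onto a
# major pentatonic scale, so rising emissions are heard as a rising melody.
PENTATONIC = [0, 2, 4, 7, 9]   # semitone offsets within one octave

def co2_to_midi(ppm, lo=350.0, hi=600.0, base_note=48, octaves=3):
    # Normalise the reading into [0, 1], then quantise to a scale degree.
    t = max(0.0, min(1.0, (ppm - lo) / (hi - lo)))
    steps = int(round(t * (octaves * len(PENTATONIC) - 1)))
    octave, degree = divmod(steps, len(PENTATONIC))
    return base_note + 12 * octave + PENTATONIC[degree]

def midi_to_hz(note):
    # Standard equal-temperament conversion (A4 = MIDI 69 = 440 Hz)
    return 440.0 * 2 ** ((note - 69) / 12)

readings = [380, 410, 455, 520, 590]        # example ppm samples
melody = [co2_to_midi(p) for p in readings]
```

Because the mapping is monotonic, higher CO2 always sounds as an equal-or-higher pitch, which keeps the data legible even in this poetic form.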

Keywords: IOLT, sonification, sustainability, tree, wearable technology

Procedia PDF Downloads 138