Search results for: cellular network
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5277

987 Deep Learning Approach for Chronic Kidney Disease Complications

Authors: Mario Isaza-Ruget, Claudia C. Colmenares-Mejia, Nancy Yomayusa, Camilo A. González, Andres Cely, Jossie Murcia

Abstract:

Quantification of the risks of complications arising from chronic kidney disease (CKD) through accurate survival models can help with patient management. A retrospective cohort study was carried out that included patients diagnosed with CKD in a primary care program and followed up between 2013 and 2018. Time-dependent and static covariates associated with demographic, clinical, and laboratory factors were included. Deep Learning (DL) survival analyses were developed for three CKD outcomes: CKD stage progression, a >25% decrease in Estimated Glomerular Filtration Rate (eGFR), and Renal Replacement Therapy (RRT). Models were evaluated and compared with Random Survival Forest (RSF) using the concordance index (C-index) metric. 2,143 patients were included. Two models were developed for each outcome. The Deep Neural Network (DNN) model reported C-index = 0.9867 for CKD stage progression, C-index = 0.9905 for the reduction in eGFR, and C-index = 0.9867 for RRT. The RSF model reached C-index = 0.6650 for CKD stage progression, C-index = 0.6759 for decreased eGFR, and C-index = 0.8926 for RRT. DNN models applied in a survival analysis context, with longitudinal covariates considered at the start of follow-up, can predict renal stage progression, a significant decrease in eGFR, and RRT. The success of these survival models lies in the appropriate definition of survival times and the analysis of covariates, especially those that vary over time.
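The C-index used to compare the models is Harrell's concordance index: the fraction of comparable patient pairs in which the patient with the shorter observed survival time also has the higher predicted risk. A minimal sketch (our illustration, not the authors' implementation):

```python
from itertools import combinations

def concordance_index(times, events, risk_scores):
    """Harrell's C-index: fraction of comparable pairs where the
    patient with the shorter observed time has the higher risk score.
    events: 1 if the outcome occurred, 0 if the observation was censored."""
    concordant = 0.0
    comparable = 0
    for (t_i, e_i, r_i), (t_j, e_j, r_j) in combinations(
            zip(times, events, risk_scores), 2):
        # reorder so that i is the patient with the earlier time
        if t_j < t_i:
            t_i, e_i, r_i, t_j, e_j, r_j = t_j, e_j, r_j, t_i, e_i, r_i
        if t_i == t_j or not e_i:
            continue  # tied times or censored earlier time: not comparable
        comparable += 1
        if r_i > r_j:
            concordant += 1.0
        elif r_i == r_j:
            concordant += 0.5  # tied risk counts as half-concordant
    return concordant / comparable
```

A C-index of 1.0 indicates perfect ranking of risks, 0.5 is chance level; the values near 0.99 reported above indicate near-perfect discrimination.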

Keywords: artificial intelligence, chronic kidney disease, deep neural networks, survival analysis

Procedia PDF Downloads 97
986 Aluminum-Based Hexaferrite and Reduced Graphene Oxide as a Suitable Microwave Absorber for Microwave Applications

Authors: Sanghamitra Acharya, Suwarna Datar

Abstract:

Extensive use of digital and smart communication creates prolonged exposure to unwanted electromagnetic (EM) radiation. This harmful radiation not only causes malfunctioning of nearby electronic gadgets but also severely affects human health. A suitable microwave absorbing material (MAM) has therefore become an urgent need in the field of stealth and radar technology. First, an aluminum-based hexaferrite was prepared by the sol-gel technique, and its carbon-derived composite was prepared by a simple one-pot chemical reduction method. Finally, composite films in Poly(vinylidene fluoride) (PVDF) were prepared by a simple gel casting technique. The present work demonstrates that the aluminum-based hexaferrite phase conjugated with graphene in a PVDF matrix is a suitable candidate in both the commercially important X and Ku bands. The structural and morphological nature was characterized by X-ray diffraction (XRD), field emission scanning electron microscopy (FESEM), and Raman spectroscopy, which confirm that 30-40 nm particles are well decorated over the graphene sheet. Magnetic force microscopy (MFM) and conductive force microscopy (CFM) studies further confirm the magnetic and conducting nature of the composite. Finally, the shielding effectiveness (SE) of the composite film was studied using a vector network analyzer (VNA) in both the X and Ku band frequency ranges and was found to be more than 30 dB and 40 dB, respectively. The as-prepared composite films are excellent microwave absorbers.

Keywords: carbon nanocomposite, microwave absorbing material, electromagnetic shielding, hexaferrite

Procedia PDF Downloads 147
985 Generative Adversarial Network Based Fingerprint Anti-Spoofing Limitations

Authors: Yehjune Heo

Abstract:

Fingerprint anti-spoofing approaches have been actively developed and applied in real-world applications. One of the main problems of fingerprint anti-spoofing is a lack of robustness to unseen samples, especially in real-world scenarios. A possible solution is to generate artificial but realistic fingerprint samples and use them for training in order to achieve good generalization. This paper contains experimental and comparative results with currently popular GAN-based methods and uses realistic synthesis of fingerprints in training in order to increase performance. Among various GAN models, the popular StyleGAN is used for the experiments. The CNN models were first trained with a dataset that did not contain generated fake images, and the accuracy along with the mean average error rate were recorded. Then, the generated fake images (fake images of live fingerprints and fake images of spoof fingerprints) were each combined with the original images (real images of live fingerprints and real images of spoof fingerprints), and various CNN models were trained. The best performance for each CNN model trained with the dataset of generated fake images, along with the accuracy and the mean average error rate, was recorded each time. We observe that current GAN-based approaches need significant improvement in anti-spoofing performance, although the overall quality of the synthesized fingerprints seems reasonable. We include an analysis of this performance degradation, especially with a small number of samples. In addition, we suggest several approaches towards improved generalization with a small number of samples, by focusing on what GAN-based approaches should and should not learn.

Keywords: anti-spoofing, CNN, fingerprint recognition, GAN

Procedia PDF Downloads 156
984 The Misuse of Social Media to Exploit "Generation Y": The Tactics of IS

Authors: Ali Riza Perçin, Eser Bingül

Abstract:

Internet technologies have created opportunities for people to share their ideologies, thoughts, and products. This virtual world, named social media, has made it possible to gather individual users and people from remote locations around the world and to establish interaction between them. However, terrorist organizations today use the internet, and most notably social network media, to an increasingly high degree to create the effects they desire through a series of online activities. These activities, designed to support their operations, include information collection (intelligence), target selection, propaganda, fundraising, and recruitment, to name a few. Meanwhile, social media has been used as the most important tool for recruitment, especially of disenfranchised youth from different regions of the world, including the West, in order to mobilize support and recruit "foreign fighters." The recruits have obtained a status that is not accessible in their own society and have preferred the style of life offered by the terrorist organizations to their current life. Like other terrorist groups, the terrorist organization Islamic State (IS) in Iraq and Syria has for a while now employed a social media strategy in order to advance its strategic objectives. At the moment, however, IS seems to be more successful in its online activities than other similar organizations. IS uses social media strategically as part of its armed activities and for the sustainability of its military presence in Syria and Iraq. In this context, "Generation Y", which could occupy a critical position and undertake an active role, has been examined. Additionally, the characteristics of "Generation Y" are explained, and the duties of families and society are stated as well.

Keywords: social media, "generation Y", terrorist organization, Islamic State (IS)

Procedia PDF Downloads 403
983 Distributed System Computing Resource Scheduling Algorithm Based on Deep Reinforcement Learning

Authors: Yitao Lei, Xingxiang Zhai, Burra Venkata Durga Kumar

Abstract:

As the quantity and complexity of computing in large-scale software systems increase, distributed computing becomes increasingly important. A distributed system achieves high-performance computing through collaboration between different computing resources. Without efficient resource scheduling, the misuse of distributed computing may cause resource waste and high costs. However, resource scheduling is usually an NP-hard problem, so no general solution can be found. Some optimization algorithms exist, such as the genetic algorithm and ant colony optimization, but the large scale of distributed systems makes these traditional optimization algorithms difficult to apply. Heuristic and machine learning algorithms are usually applied in this situation to ease the computing load. As a result, we review traditional resource scheduling optimization algorithms and introduce a deep reinforcement learning method that utilizes the perceptual ability of neural networks and the decision-making ability of reinforcement learning. Using the machine learning method, we try to find important factors that influence the performance of distributed computing and help the distributed system perform efficient computing resource scheduling. This paper surveys the application of deep reinforcement learning to distributed system computing resource scheduling, proposes a deep reinforcement learning method that uses a recurrent neural network to optimize resource scheduling, and discusses the challenges and improvement directions for DRL-based resource scheduling algorithms.

Keywords: resource scheduling, deep reinforcement learning, distributed system, artificial intelligence

Procedia PDF Downloads 80
982 An Application of Path Planning Algorithms for Autonomous Inspection of Buried Pipes with Swarm Robots

Authors: Richard Molyneux, Christopher Parrott, Kirill Horoshenkov

Abstract:

This paper aims to demonstrate how various algorithms can be implemented within swarms of autonomous robots to provide continuous inspection within underground pipeline networks. Current methods of fault detection within pipes are costly, time-consuming, and inefficient. As such, solutions tend toward a more reactive approach, repairing faults, as opposed to proactively seeking leaks and blockages. The paper presents an efficient inspection method, showing that autonomous swarm robotics is a viable way of monitoring underground infrastructure. Tailored adaptations of various Vehicle Routing Problems (VRP) and path-planning algorithms provide a customised inspection procedure for complicated networks of underground pipes. The performance of multiple algorithms is compared to determine their effectiveness and feasibility. Notable inspirations come from ant colonies and stigmergy, graph theory, the k-Chinese Postman Problem (k-CPP) and traffic theory. Unlike most swarm behaviours, which rely on fast communication between agents, underground pipe networks are a highly challenging communication environment with extremely limited communication ranges. This is due to the extreme variability in pipe conditions and the relatively high attenuation of the acoustic and radio waves with which robots would usually communicate. This paper illustrates how to optimise the inspection process and how to increase the frequency with which the robots pass each other, without compromising the routes they are able to take to cover the whole network.
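The Chinese Postman family of problems asks for a closed walk that traverses every pipe (edge) of the network graph at least once. As a hedged sketch of the underlying routing primitive (not the authors' algorithm): when the graph is connected and every junction has even degree, Hierholzer's algorithm finds such a route that uses each edge exactly once:

```python
from collections import defaultdict

def eulerian_circuit(edges, start):
    """Hierholzer's algorithm: a closed walk traversing every edge once.
    Assumes an undirected, connected graph with all vertex degrees even."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    circuit, stack = [], [start]
    while stack:
        v = stack[-1]
        if adj[v]:
            u = adj[v].pop()
            adj[u].remove(v)   # consume the edge in both directions
            stack.append(u)
        else:
            circuit.append(stack.pop())
    return circuit[::-1]
```

For a triangular loop of pipes, `eulerian_circuit([(0, 1), (1, 2), (2, 0)], 0)` returns a route starting and ending at junction 0 that visits every pipe once; the k-CPP generalizes this to k robots sharing the route.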

Keywords: autonomous inspection, buried pipes, stigmergy, swarm intelligence, vehicle routing problem

Procedia PDF Downloads 132
981 Automatic Classification of Lung Diseases from CT Images

Authors: Abobaker Mohammed Qasem Farhan, Shangming Yang, Mohammed Al-Nehari

Abstract:

Pneumonia is a lung disease that creates congestion in the chest, and severe congestion can lead to loss of life. Pneumonic lung disease is caused by viral pneumonia, bacterial pneumonia, or COVID-19-induced pneumonia. Early prediction and classification of such lung diseases help to reduce the mortality rate. We propose an automatic Computer-Aided Diagnosis (CAD) system in this paper using a deep learning approach. The proposed CAD system takes as input raw computerized tomography (CT) scans of the patient's chest and automatically predicts the disease class. We designed a Hybrid Deep Learning Algorithm (HDLA) to improve accuracy and reduce processing requirements. The raw CT scans are first pre-processed to enhance their quality for further analysis. We then applied a hybrid model that consists of automatic feature extraction and classification. We propose a robust 2D Convolutional Neural Network (CNN) model to extract automatic features from the pre-processed CT images. This CNN model ensures feature learning with extremely effective 1D feature extraction for each input CT image. The outcome of the 2D CNN model is then normalized using the min-max technique. The second step of the proposed hybrid model involves training and classification using different classifiers. Simulation outcomes using a publicly available dataset prove the robustness and efficiency of the proposed model compared to state-of-the-art algorithms.
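The min-max normalization step mentioned above rescales each extracted feature vector into [0, 1] before classification. A minimal sketch (our illustration; the epsilon guard for constant feature vectors is an assumption, not taken from the paper):

```python
def min_max_normalize(features, eps=1e-12):
    """Scale a feature vector to [0, 1] via (x - min) / (max - min)."""
    lo, hi = min(features), max(features)
    if hi - lo < eps:
        # constant vector: mapping is undefined, return all zeros
        return [0.0 for _ in features]
    return [(x - lo) / (hi - lo) for x in features]
```

For example, `min_max_normalize([2.0, 4.0, 6.0])` yields `[0.0, 0.5, 1.0]`, putting all CNN features on a common scale for the downstream classifiers.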

Keywords: CT scan, Covid-19, deep learning, image processing, lung disease classification

Procedia PDF Downloads 103
980 A Program Evaluation of TALMA Full-Year Fellowship Teacher Preparation

Authors: Emilee M. Cruz

Abstract:

Teachers take part in short-term teaching fellowships abroad, and their preparation before, during, and after the experience is critical to teachers' feelings of success in the international classroom. A program evaluation of the teacher preparation within TALMA: The Israel Program for Excellence in English (TALMA) full-year teaching fellowship was conducted. A questionnaire was developed that examined the professional development, deliberate reflection, and cultural and language immersion offered before, during, and after the short-term experience. The evaluation also surveyed teachers' feelings of preparedness for the Israeli classroom and any recommendations they had for future teacher preparation within the fellowship program. The review suggests that the TALMA program include integrated professional learning communities between fellows and Israeli co-teachers, more opportunities for immersive Hebrew language learning, a broader professional network with Israelis, opportunities for guided discussion with the TALMA community, and continued participation in TALMA events and learning following the full-year fellowship. Similar short-term international programs should consider these findings in the design of their participant preparation programs. The review also offers direction for future program evaluation of short-term participant preparation, including the need for frequent updates of response items to match current offerings and for evaluation of participant feelings of preparedness before, during, and after the full-year fellowship.

Keywords: educational program evaluation, international teaching, short-term teaching, teacher beliefs, teaching fellowship, teacher preparation

Procedia PDF Downloads 146
979 SIP Flooding Attacks Detection and Prevention Using Shannon, Renyi and Tsallis Entropy

Authors: Neda Seyyedi, Reza Berangi

Abstract:

Voice over IP (VoIP) networking, also known as Internet telephony, is growing rapidly and now occupies a large part of the communications market. With the growth of any technology, the related security issues become particularly important. As this technology is adopted in different environments, with the numerous features it puts at our disposal, there arises an increasing need to address its security threats. Being IP-based and playing a signaling role in VoIP networks, the Session Initiation Protocol (SIP) lets invaders exploit weaknesses of the protocol to disable VoIP service. One of the most important threats is the denial of service attack, a branch of which we discuss in this article as flooding attacks. These attacks waste server resources and deprive authorized users of service. Distributed denial of service attacks and low-rate attacks can mislead many attack detection mechanisms. In this paper, we introduce a mechanism which not only detects distributed denial of service attacks and low-rate attacks but can also identify the attackers accurately. We detect and prevent flooding attacks in the SIP protocol using Shannon (FDP-S), Renyi (FDP-R), and Tsallis (FDP-T) entropy. We conducted an experiment to compare the detection percentage and the rate of false alarm messages when using the Shannon, Renyi, or Tsallis entropy as a measure of disorder. Implementation results show that, owing to the parametric nature of the Renyi and Tsallis entropies, different detection percentages and false alarm rates are obtained by changing the parameters, with the possibility of adjusting the sensitivity of the detection mechanism.
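As a hedged illustration of the three disorder measures (not the authors' implementation), the Shannon, Rényi, and Tsallis entropies of, say, the distribution of SIP source addresses in a traffic window can be computed as follows; a flood dominated by few sources skews this distribution and shifts the entropy away from its baseline:

```python
import math
from collections import Counter

def shannon(p):
    """Shannon entropy H = -sum p_i log2 p_i (bits)."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def renyi(p, alpha):
    """Renyi entropy of order alpha (alpha > 0, alpha != 1)."""
    return math.log2(sum(pi ** alpha for pi in p)) / (1 - alpha)

def tsallis(p, q):
    """Tsallis entropy of order q (q != 1)."""
    return (1 - sum(pi ** q for pi in p)) / (q - 1)

def distribution(items):
    """Empirical probability distribution of observed items
    (e.g. source IPs of SIP INVITE messages in a window)."""
    counts = Counter(items)
    total = sum(counts.values())
    return [c / total for c in counts.values()]
```

The `alpha` and `q` parameters are the tunable knobs the abstract refers to: varying them reweights rare versus dominant sources, which adjusts the detector's sensitivity.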

Keywords: VoIP networks, flooding attacks, entropy, computer networks

Procedia PDF Downloads 367
978 Internet of Things for Smart Dedicated Outdoor Air System in Buildings

Authors: Dararat Tongdee, Surapong Chirarattananon, Somchai Maneewan, Chantana Punlek

Abstract:

Recently, the Internet of Things (IoT) has become an important technology that connects devices to the network so that people can access real-time communication. This technology is used to report, collect, and analyze big data to achieve a purpose. For a smart building, there are many IoT technologies that enable building managers and operators to improve occupant thermal comfort, indoor air quality, and building energy efficiency. In this research, we propose monitoring and controlling the performance of a smart dedicated outdoor air system (SDOAS) based on an IoT platform. The SDOAS was specifically designed with a desiccant unit and a thermoelectric module. The designed system was intended to monitor, give notifications about, and control indoor environmental factors such as temperature, humidity, and carbon dioxide (CO₂) level. The SDOAS was tested against the American Society of Heating, Refrigerating and Air-Conditioning Engineers standard (ASHRAE 62.2) and an indoor air quality standard. The system notifies the user via a Blynk notification when the status of the building is uncomfortable or tolerable limits are reached according to the conditions that were set. The user can then control the system via the Blynk application on a smartphone. The experimental results indicate that the temperature and humidity of indoor fresh air in the comfort zone are approximately 26 degrees Celsius and 58%, respectively. Furthermore, the CO₂ level was kept below 1000 ppm, as required by the indoor air quality standard. Therefore, the proposed system works efficiently and is easy to use in buildings.
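The notification logic described above amounts to comparing each sensor reading against set limits. A minimal sketch of such a threshold check (the exact comfort ranges below are our assumptions for illustration; only the 1000 ppm CO₂ limit and the ~26 °C / 58% operating point come from the abstract):

```python
def check_air_quality(temp_c, humidity_pct, co2_ppm,
                      temp_range=(23.0, 26.0),
                      humidity_range=(40.0, 60.0),
                      co2_limit=1000.0):
    """Return a list of alert strings for readings outside the set limits.
    An empty list means all readings are within the comfort zone."""
    alerts = []
    if not (temp_range[0] <= temp_c <= temp_range[1]):
        alerts.append(f"temperature {temp_c} C outside {temp_range}")
    if not (humidity_range[0] <= humidity_pct <= humidity_range[1]):
        alerts.append(f"humidity {humidity_pct}% outside {humidity_range}")
    if co2_ppm > co2_limit:
        alerts.append(f"CO2 {co2_ppm} ppm above {co2_limit} ppm limit")
    return alerts
```

In the real system, each returned alert string would be forwarded as a push notification to the user's smartphone app.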

Keywords: internet of things, indoor air quality, smart dedicated outdoor air system, thermal comfort

Procedia PDF Downloads 164
977 The Prevalence and Impact of Anxiety Among Medical Students in the MENA Region: A Systematic Review, Meta-Analysis, and Meta-Regression

Authors: Kawthar F. Albasri, Abdullah M. AlHudaithi, Dana B. AlTurairi, Abdullaziz S. AlQuraini, Adoub Y. AlDerazi, Reem A. Hubail, Haitham A. Jahrami

Abstract:

Several studies have found that medical students have a significant prevalence of anxiety. The purpose of this review is to carefully evaluate the current research on anxiety among medical students in the MENA region and, as a result, estimate the prevalence of these disturbances. Multiple databases, including CINAHL (Cumulative Index to Nursing and Allied Health Literature), the Cochrane Library, Embase, MEDLINE (Medical Literature Analysis and Retrieval System Online), PubMed, PsycINFO (Psychological Information Database), Scopus, Web of Science, UpToDate, ClinicalTrials.gov, the WHO Global Health Library, EBSCOhost, ProQuest, JAMA Network, and ScienceDirect, were searched. The reference lists of the retrieved articles were rigorously searched and rated for quality. A random-effects meta-analysis was performed to compute estimates. The current meta-analysis revealed an alarming estimated pooled prevalence of anxiety (K = 46, N = 27023) of 52.5% [95% CI: 43.3%–61.6%]. A total of 62.0% [95% CI: 42.9%–78.0%] of the students (K = 18, N = 16466) suffered from anxiety during the COVID-19 pandemic, while 52.5% [95% CI: 43.3%–61.6%] had anxiety before COVID-19. Based on the GAD-7 measure, a total of 55.7% [95% CI: 30.5%–78.3%] of the students (K = 10, N = 5830) had anxiety, and a total of 54.7% [95% CI: 42.8%–66.0%] of the students (K = 18, N = 12154) had anxiety using the DASS-21 or DASS-42 measure. Anxiety is a common and genuine problem among medical students. Further research should be conducted post-COVID-19, with a focus on anxiety prevention and intervention initiatives for medical students.
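Random-effects pooled estimates of this kind are commonly computed with the DerSimonian–Laird method, which inflates each study's variance by an estimated between-study variance tau² before inverse-variance weighting. A minimal sketch (our illustration, not the authors' code; inputs are per-study prevalence estimates with their variances):

```python
import math

def dersimonian_laird(estimates, variances):
    """DerSimonian-Laird random-effects pooling.
    Returns (pooled estimate, 95% confidence interval)."""
    w = [1.0 / v for v in variances]                  # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    df = len(estimates) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                     # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]    # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, estimates)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)
```

With heterogeneous studies, tau² widens the confidence interval relative to a fixed-effect analysis, which is why the pooled CIs above are broad despite large N.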

Keywords: anxiety, medical students, MENA, meta-analysis, prevalence

Procedia PDF Downloads 38
976 Optimized Techniques for Reducing the Reactive Power Generation in Offshore Wind Farms in India

Authors: Pardhasaradhi Gudla, Imanual A.

Abstract:

Electrical power generated offshore must be transmitted to the onshore grid using subsea cables. Long subsea cables produce reactive power, which must be compensated in order to limit transmission losses, optimize transmission capacity, and keep the grid voltage within safe operational limits. The installation cost of a wind farm includes the structure design cost and the electrical system cost. India has targeted 175 GW of renewable energy capacity by 2022, including offshore wind power generation. Because sea depths are greater in India, installation costs will be higher than in European countries, where offshore wind energy is already being generated successfully. Innovations are therefore required to reduce offshore wind power project costs. This paper presents optimized techniques to reduce the installation cost of an offshore wind farm with respect to the electrical transmission system. It provides techniques for increasing the current-carrying capacity of a subsea cable by decreasing its reactive power generation (capacitance effect). Many methods for reactive power compensation in wind power plants are already in use, and the main reason such compensation is needed is the capacitance effect of the subsea cable. If the cable capacitance is diminished, the requirement for reactive power compensation can be reduced or optimized by avoiding the intermediate substation at the midpoint of the transmission network.
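The charging (reactive) power that a long AC subsea cable generates scales with its total shunt capacitance and the square of the operating voltage, Q = 2·pi·f·C·U². A small sketch of this standard estimate (the example cable parameters below are illustrative assumptions, not taken from the paper):

```python
import math

def cable_reactive_power_mvar(freq_hz, cap_uf_per_km, length_km, voltage_kv):
    """Approximate reactive (charging) power of an AC subsea cable:
    Q = 2*pi*f * C_total * U^2, with C_total the total shunt capacitance."""
    c_total = cap_uf_per_km * 1e-6 * length_km         # total capacitance, F
    u = voltage_kv * 1e3                               # line voltage, V
    return 2 * math.pi * freq_hz * c_total * u * u / 1e6   # Mvar
```

For example, a hypothetical 100 km, 220 kV cable with 0.2 uF/km at 50 Hz generates roughly 300 Mvar of charging power, which is why reducing cable capacitance directly reduces the compensation requirement.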

Keywords: offshore wind power, optimized techniques, power system, sub sea cable

Procedia PDF Downloads 155
975 Emergence of Information Centric Networking and Web Content Mining: A Future Efficient Internet Architecture

Authors: Sajjad Akbar, Rabia Bashir

Abstract:

With the growth in the number of users, Internet usage has evolved. Due to its key design principle, there has been an incredible expansion in its size. This tremendous growth of the Internet has brought new applications (mobile video and cloud computing) as well as new user requirements, i.e., a content distribution environment, mobility, ubiquity, security, trust, etc. Users are more interested in content than in the communicating peer nodes. The current Internet architecture is a host-centric networking approach, which is not well suited to these types of applications. With the growing use of multiple interactive applications, the host-centric approach is considered less efficient because it depends on physical location; for this reason, Information Centric Networking (ICN) is considered a potential future Internet architecture. It is an approach that introduces uniquely named data as a core Internet principle. It uses a receiver-oriented rather than a sender-oriented approach, and it introduces a naming-based information system at the network layer. Although ICN is considered a future Internet architecture, there is much criticism of it, mainly concerning how ICN will manage the most relevant content. Here, Web Content Mining (WCM) approaches can help with the appropriate data management of ICN. To address this issue, this paper contributes by (i) discussing multiple ICN approaches, (ii) analyzing different Web Content Mining approaches, and (iii) creating a new Internet architecture by merging ICN and WCM to solve the data management issues of ICN. From ICN, Content-Centric Networking (CCN) is selected for the new architecture, whereas an agent-based approach from Web Content Mining is selected to find the most appropriate data.
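The core CCN primitive is that a request names the content, not a host, and any node holding a cached copy may answer. A toy sketch of a node's content store (our illustration; the LRU eviction policy is an assumption, as real CCN caches use various replacement strategies):

```python
from collections import OrderedDict

class ContentStore:
    """Toy CCN content store: lookup by hierarchical content name,
    with LRU eviction when the cache exceeds its capacity."""

    def __init__(self, capacity=2):
        self.capacity = capacity
        self.cache = OrderedDict()

    def publish(self, name, data):
        self.cache[name] = data
        self.cache.move_to_end(name)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used

    def request(self, name):
        if name in self.cache:
            self.cache.move_to_end(name)     # refresh recency on a cache hit
            return self.cache[name]
        return None                          # miss: forward the interest
```

A requester asking for `/video/a` gets the data from whichever node caches it, independent of the publisher's address; this is the location independence the abstract contrasts with host-centric networking.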

Keywords: agent based web content mining, content centric networking, information centric networking

Procedia PDF Downloads 438
974 Vascularized Adipose Tissue Engineering by Using Adipose ECM/Fibroin Hydrogel

Authors: Alisan Kayabolen, Dilek Keskin, Ferit Avcu, Andac Aykan, Fatih Zor, Aysen Tezcaner

Abstract:

Adipose tissue engineering is a promising field for the regeneration of soft tissue defects. However, only very thin implants can be used in vivo, since vascularization is still a problem for thick implants. Another problem is finding a biocompatible scaffold with good mechanical properties. The aim of this study is to develop a thick, vascularized adipose tissue construct that will integrate with the host, and to perform its in vitro and in vivo characterizations. For this purpose, a hydrogel of decellularized adipose tissue (DAT) and fibroin was produced, and both endothelial cells and adipocytes differentiated from adipose-derived stem cells were encapsulated in this hydrogel. Mixing DAT with fibroin allowed rapid gel formation by vortexing. It also made it possible to adjust the mechanical strength by changing the fibroin-to-DAT ratio. Based on compression tests, the gel with the DAT/fibroin ratio whose mechanical properties were most similar to adipose tissue was selected for cell culture experiments. In vitro characterizations showed that DAT is not cytotoxic; on the contrary, it has many natural ECM components, which provide biocompatibility and bioactivity. Subcutaneous implantation of the hydrogels resulted in no immunogenic reaction or infection. Moreover, localized empty hydrogels gelled successfully around the host vessel with the required shape. Implantation of cell-encapsulated hydrogels and histological analyses are under study. It is expected that the endothelial cells inside the hydrogel will form a capillary network and bind to the host vessel passing through the hydrogel.

Keywords: adipose tissue engineering, decellularization, encapsulation, hydrogel, vascularization

Procedia PDF Downloads 502
973 Binderless, Naturally-Extracted, Metal-Free Electrocatalyst for Efficient NOₓ Reduction

Authors: Hafiz Muhammad Adeel Sharif, Tian Li, Changping Li

Abstract:

Recently, the emission of nitrogen and sulphur oxides (NOₓ, SO₂) has become a global issue, posing serious threats to health and the environment. Catalytic reduction of NOₓ and SOₓ gases into environmentally benign gases is considered one of the best approaches. However, regeneration of the catalyst, the high bond-dissociation energy of NOₓ (150.7 kcal/mol), escape of the intermediate gas (N₂O, a greenhouse gas) with the treated flue gas, and the limited activity of the catalyst remain great challenges. Here, a cheap, binderless, naturally-extracted bass-wood thin carbon electrode (TCE) is presented, which shows excellent catalytic activity towards NOₓ reduction. The bass wood was carbonized at 900 ℃, followed by thermal activation in the presence of CO₂ gas at 750 ℃. The thermal activation resulted in an increase in epoxy groups on the surface of the TCE and an enhancement of the surface area as well as the degree of graphitization. The TCE has a unique 3D, strongly interconnected network of hierarchical micro/meso/macropores that allows a large electrode/electrolyte interface. Owing to these characteristics, the TCE exhibited excellent catalytic efficiency towards NOₓ (~83.3%) under ambient conditions, an enhanced catalytic response under pH variation and sulphite exposure, and excellent stability for up to 168 hours. Moreover, a temperature-dependent activity trend was found, with the highest catalytic activity achieved at 80 ℃, beyond which the electrolyte became evaporative and performance decreased. The designed electrocatalyst shows great potential for effective NOₓ reduction and is highly cost-effective, green, and sustainable.

Keywords: electrocatalyst, NOx-reduction, bass-wood electrode, integrated wet-scrubbing, sustainable

Procedia PDF Downloads 41
972 Linux Security Management: Research and Discussion on Problems Caused by Different Aspects

Authors: Ma Yuzhe, Burra Venkata Durga Kumar

Abstract:

The computer is a great invention. As people use computers more and more frequently, the demand for PCs is growing, and the performance of computer hardware is rising to handle more complex processing and operations. However, operating system development, which provides the soul of the computer, stalled for a time. Faced with the high price of UNIX (Uniplexed Information and Computing System), batch after batch of personal computer owners could only give up. DOS was too simple and left little room for innovation, so it was not a good choice, and macOS is a special operating system for Apple computers that cannot be widely used on other personal computers. In this environment, Linux, based on the UNIX system, was born. Linux combines the advantages of these operating systems and is built from many loadable kernel modules, making its core architecture relatively powerful. The Linux system supports all Internet protocols, so it has very good network functionality. Linux supports multiple users, and each user's files are isolated from the others'. Linux can also multitask, running different programs independently at the same time. Linux is a completely open-source operating system: users can obtain and modify the source code for free. Because of these advantages, Linux has attracted a large number of users and programmers. The Linux system is also constantly upgraded and improved, and many different distributions have been issued, suitable for both community and commercial use. The Linux system has good security partly because of its file permission system. However, due to constantly emerging vulnerabilities and hazards, the security of the operating system in use also needs continued attention. This article focuses on the analysis and discussion of Linux security issues.

Keywords: Linux, operating system, system management, security

Procedia PDF Downloads 78
971 Stem Cell Fate Decision Depending on TiO2 Nanotubular Geometry

Authors: Jung Park, Anca Mazare, Klaus Von Der Mark, Patrik Schmuki

Abstract:

In the clinical application of TiO2 implants in tooth and hip replacement, the migration, adhesion, and differentiation of neighboring mesenchymal stem cells onto implant surfaces are critical steps for successful bone regeneration. In the last decade, increasing attention has been paid to nanoscale electrochemical surface modification of the TiO2 layer for improving bone-TiO2 surface integration. We generated, on titanium surfaces, self-assembled layers of vertically oriented TiO2 nanotubes with defined diameters between 15 and 100 nm, and here we show that mesenchymal stem cells finely sense the TiO2 nanotubular geometry and quickly decide their fate, either differentiating into osteoblasts or undergoing programmed cell death (apoptosis) on the TiO2 nanotube layers. These cell fate decisions depend critically on nanotube diameter (15-100 nm), which is sensed through integrin clustering. We further demonstrate that nanoscale topography sensing is feasible not only in mesenchymal stem cells but appears to be a generalized mechanism of nanoscale microenvironment-cell interaction in several cell types composing the bone tissue network, including osteoblasts, osteoclasts, endothelial cells, and hematopoietic stem cells. Additionally, we discuss the synergistic effect of simultaneous stimulation by nanotube-bound growth factors and nanoscale topographic cues on enhanced bone regeneration.

Keywords: TiO2 nanotube, stem cell fate decision, nano-scale microenvironment, bone regeneration

Procedia PDF Downloads 404
970 Subway Stray Current Effects on Gas Pipelines in the City of Tehran

Authors: Mohammad Derakhshani, Saeed Reza Allahkarama, Michael Isakhani-Zakaria, Masoud Samadian, Hojjat Sharifi Rasaey

Abstract:

In order to investigate the effects of stray current from DC traction systems (subway) on cathodically protected gas pipelines, the subway and gas network maps of the city of Tehran were superimposed into a comprehensive map. 213 intersections and about 100,150 meters of pipeline running parallel to the railway right of way were identified and specified for field measurements. Potential data were logged for one hour at each test point, and 24-hour potential monitoring was carried out at selected test points as well. Results showed that dynamic stray current from the subway appears as fluctuations superimposed on the pipeline's static potential, visible in the diagrams during night periods. These fluctuations can drive the pipeline potential out of the safe zone and lead to corrosion or overprotection. In this study, a maximum shift of 100 mV in the pipe-to-soil potential was taken as the criterion for an effective dynamic stray current presence. Potential fluctuations ranging from 100 mV to 3 V were measured at points along the pipelines, exceeding the proposed criterion and warranting investigation. Corrosion rates influenced by stray currents were calculated using coupons. A coupon connected to the pipeline at one location in region 1 of the city of Tehran showed a corrosion rate of 4.2 mpy (with cathodic protection and under the influence of stray currents), about 1.6 times the free corrosion rate of 2.6 mpy.

Keywords: stray current, DC traction, subway, buried pipelines, cathodic protection

Procedia PDF Downloads 789
969 Monitoring Synthesis of Biodiesel through Online Density Measurements

Authors: Arnaldo G. de Oliveira, Jr, Matthieu Tubino

Abstract:

The transesterification of triglycerides with alcohols that occurs during biodiesel synthesis causes continuous changes in several physical properties of the reaction mixture, such as refractive index, viscosity and density. Among them, density is a useful parameter for monitoring the reaction, predicting the composition of the reacting mixture and verifying the conversion of oil into biodiesel. In this context, a system was constructed to continuously track changes in the density of a reacting mixture containing soybean oil, methanol and sodium methoxide (30% w/w solution in methanol), stirred at 620 rpm at room temperature (about 27 °C). A polyethylene pipe network connected to a peristaltic pump collected the mixture and pumped it through a coil fixed on the plate of an analytical balance. The collected mass values were used to plot a curve of system mass against reaction time. The density variation profile over time clearly shows three steps: 1) the dispersion of methanol in the oil causes a decrease in system mass, due to the lower density of the alcohol, followed by stabilization; 2) the addition of the catalyst (sodium methoxide) causes a larger decrease in mass than the first step because of the conversion of oil into biodiesel; 3) a final stabilization, denoting the end of the reaction. This profile provides information that was used to predict the composition of the mixture over time and the reaction rate. Precise knowledge of the duration of the synthesis saves time and resources in a production-scale system, and this kind of monitoring offers continuous measurements without the need to collect aliquots.
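Since the coil on the balance has a fixed volume, each logged mass converts directly to a density (ρ = m/V), and the end-of-reaction plateau can be detected once successive readings stop changing. The sketch below illustrates that logic; the coil volume and tolerance are illustrative assumptions, not values from the paper:

```python
def densities(masses_g, coil_volume_ml):
    """Convert balance readings (g) of a fixed-volume coil to densities (g/mL)."""
    return [m / coil_volume_ml for m in masses_g]

def stabilized(values, window=3, tol=1e-3):
    """True once the last `window` readings vary by less than `tol`,
    a simple proxy for the final stabilization step described above."""
    tail = values[-window:]
    return len(tail) == window and max(tail) - min(tail) < tol
```

In use, the monitoring loop would append each new density and stop logging once `stabilized` returns True.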

Keywords: biodiesel, density measurements, online continuous monitoring, synthesis

Procedia PDF Downloads 546
968 Early Depression Detection for Young Adults with a Psychiatric and AI Interdisciplinary Multimodal Framework

Authors: Raymond Xu, Ashley Hua, Andrew Wang, Yuru Lin

Abstract:

During COVID-19, the depression rate increased dramatically, and young adults are among the most vulnerable to the mental health effects of the pandemic. Lower-income families are diagnosed with depression at a higher rate than the general population but have less access to clinics. This research aims at early depression detection at low cost, large scale and high accuracy through an interdisciplinary approach that incorporates clinical practices defined by the American Psychiatric Association (APA) with a multimodal AI framework. The proposed approach detects the nine depression symptoms with Natural Language Processing sentiment analysis and a symptom-based lexicon uniquely designed for young adults. The experiments were conducted on multimedia survey results from adolescents and young adults and on unbiased Twitter communications, and the results were further aggregated with facial emotional cues analyzed by a Convolutional Neural Network on the survey videos. Five experiments, each conducted on 10k data entries, reached consistent results with an average accuracy of 88.31%, higher than existing natural language analysis models. This approach can reach 300+ million daily active Twitter users and is highly accessible to low-income populations, promoting early depression detection, raising awareness among adolescents and young adults, and revealing complementary cues to assist clinical depression diagnosis.
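The symptom-lexicon step can be pictured as mapping cue phrases to symptom categories and checking which categories a text activates. The authors' lexicon is not public, so the entries below are placeholder assumptions purely for illustration:

```python
# Illustrative mini-lexicon: these cue phrases are placeholders,
# not the paper's actual entries for young adults.
SYMPTOM_LEXICON = {
    "depressed_mood": {"sad", "hopeless", "empty"},
    "anhedonia": {"no fun", "lost interest"},
    "sleep": {"insomnia", "can't sleep"},
    "fatigue": {"exhausted", "no energy"},
    "worthlessness": {"worthless", "failure"},
}

def symptoms_present(text):
    """Return the set of symptom categories whose cue phrases occur in `text`.
    Naive substring matching; a real system would tokenize and negate."""
    lowered = text.lower()
    return {symptom for symptom, cues in SYMPTOM_LEXICON.items()
            if any(cue in lowered for cue in cues)}
```

A full pipeline would combine this category count with sentiment scores and the CNN's facial cues before thresholding.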

Keywords: artificial intelligence, COVID-19, depression detection, psychiatric disorder

Procedia PDF Downloads 95
967 NANCY: Combining Adversarial Networks with Cycle-Consistency for Robust Multi-Modal Image Registration

Authors: Mirjana Ruppel, Rajendra Persad, Amit Bahl, Sanja Dogramadzi, Chris Melhuish, Lyndon Smith

Abstract:

Multimodal image registration is a profoundly complex task, which is why deep learning has been widely used to address it in recent years. However, two main challenges remain: first, the lack of ground truth data calls for an unsupervised learning approach, which leads to the second challenge of defining a feasible loss function that can compare two images of different modalities to judge their level of alignment. To avoid this issue altogether, we implement a generative adversarial network consisting of two registration networks G_AB and G_BA and two discrimination networks D_A and D_B connected by spatial transformation layers. G_AB learns to generate a deformation field that registers an image of modality B to an image of modality A, using the feedback of the discriminator D_B, which learns to judge the quality of alignment of the registered image B; G_BA and D_A learn the mapping from modality A to modality B. Additionally, a cycle-consistency loss is implemented: both registration networks are applied twice, producing images Â and B̂ registered to B̃ and Ã, which were themselves registered to the initial image pair A, B, so that the resulting and initial images of the same modality can be compared directly. A dataset of liver CT and MRI was used to evaluate the quality of our approach and to compare it against learning- and non-learning-based registration algorithms. Our approach achieves Dice scores of up to 0.80 ± 0.01 and is therefore comparable to, and slightly more successful than, algorithms such as SimpleElastix and VoxelMorph.
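The core of the cycle-consistency idea is that composing the A→B and B→A registrations should return every point to where it started. A 1-D toy version (deformations as plain functions rather than the dense fields the paper learns) makes the loss explicit:

```python
def cycle_consistency_loss(g_ab, g_ba, points):
    """Mean absolute error between each coordinate and its image after the
    A->B and B->A registrations are composed; zero for perfect inverses.
    A 1-D toy stand-in for the dense deformation fields used in practice."""
    round_trip = [g_ba(g_ab(x)) for x in points]
    return sum(abs(r - x) for r, x in zip(round_trip, points)) / len(points)
```

During training this term is added to the adversarial losses from D_A and D_B, penalizing registration pairs that are not mutually inverse.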

Keywords: cycle consistency, deformable multimodal image registration, deep learning, GAN

Procedia PDF Downloads 93
966 Integrated Free Space Optical Communication and Optical Sensor Network System with Artificial Intelligence Techniques

Authors: Yibeltal Chanie Manie, Zebider Asire Munyelet

Abstract:

5G and 6G technology offers enhanced quality of service with high data transmission rates, which necessitates implementing the Internet of Things (IoT) within the 5G/6G architecture. In this paper, we propose the integration of free-space optical communication (FSO) with fiber sensor networks for IoT applications. FSO has recently been gaining popularity as an effective alternative to the limited availability of radio frequency (RF) spectrum, thanks to its flexibility, high achievable optical bandwidth and low power consumption, in communication applications such as disaster recovery, last-mile connectivity, drones, surveillance, backhaul and satellite links. Hence, high-speed FSO is an optimal choice for wireless networks to realize the full potential of 5G/6G technology, offering 100 Gbit/s or more in IoT applications. Machine learning must be integrated into the design, planning and optimization of future optical wireless communication networks to actualize this vision of intelligent processing and operation, and fiber sensors are important for achieving real-time, accurate and smart monitoring in IoT applications. Accordingly, we propose deep learning techniques to estimate the strain changes and peak wavelengths of multiple fiber Bragg grating (FBG) sensors using only the FBG spectra obtained from a real experiment.
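FBG interrogation reduces to locating the reflection peak in a sampled spectrum; the learned estimators in the paper are typically benchmarked against simple baselines such as the intensity-weighted centroid sketched below (the sample values are illustrative, not measured data):

```python
def fbg_peak_wavelength(wavelengths_nm, intensities):
    """Estimate an FBG reflection peak by intensity-weighted centroid,
    a classical baseline for comparison with learned peak estimators."""
    total = sum(intensities)
    return sum(w * i for w, i in zip(wavelengths_nm, intensities)) / total
```

Strain is then inferred from the shift of this peak relative to the grating's unstrained Bragg wavelength.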

Keywords: optical sensor, artificial Intelligence, Internet of Things, free-space optics

Procedia PDF Downloads 27
965 Iris Cancer Detection System Using Image Processing and Neural Classifier

Authors: Abdulkader Helwan

Abstract:

Iris cancer, also called intraocular melanoma, is a cancer that starts in the iris, the colored part of the eye that surrounds the pupil. There is a need for an accurate and cost-effective iris cancer detection system, since the techniques currently available are still not efficient. The combination of image processing and artificial neural networks is highly effective for the diagnosis and detection of iris cancer: image processing techniques improve diagnosis by enhancing image quality so that physicians can diagnose properly, while neural networks help in deciding whether the eye is cancerous or not. This paper aims to develop an intelligent system that simulates human visual detection of intraocular melanoma. The suggested system combines both image processing techniques and neural networks. The images are first converted to grayscale, filtered, and then segmented using the Prewitt edge detection algorithm to detect the iris, the sclera circles and the cancer. Principal component analysis is used to reduce the image size and to extract features, which are then fed as inputs to a neural network capable of deciding whether the eye is cancerous, based on experience acquired over many training iterations on normal and abnormal eye images during the training phase. Normal images were obtained from a public database available on the internet, "Mile Research", while abnormal ones were obtained from the "eyecancer" database. The experimental results for the proposed system show a high accuracy of 100% in detecting cancer and making the right decision.
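The segmentation step relies on the Prewitt operator, which estimates horizontal and vertical intensity gradients with two fixed 3x3 kernels. A minimal pure-Python sketch of the gradient-magnitude computation (the real pipeline would run it on the filtered grayscale image):

```python
def prewitt_magnitude(img):
    """Gradient magnitude of a 2-D grayscale image (list of lists) using
    the 3x3 Prewitt kernels; border pixels are left at zero."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gy = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    v = img[y + dy][x + dx]
                    gx += dx * v  # Prewitt x-kernel: columns weighted -1, 0, +1
                    gy += dy * v  # Prewitt y-kernel: rows weighted -1, 0, +1
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

Thresholding the resulting magnitude map yields the edge contours from which the iris and sclera circles can be fitted.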

Keywords: iris cancer, intraocular melanoma, cancerous, Prewitt edge detection algorithm, sclera

Procedia PDF Downloads 465
964 Global Historical Distribution Range of Brown Bear (Ursus Arctos)

Authors: Tariq Mahmood, Faiza Lehrasab, Faraz Akrim, Muhammad Sajid nadeem, Muhammad Mushtaq, Unza waqar, Ayesha Sheraz, Shaista Andleeb

Abstract:

The brown bear (Ursus arctos), a member of the family Ursidae, is distributed across a wide range of habitats in North America, Europe and Asia. The global distribution range of the brown bear is suspected to be shrinking at present due to various factors. The species is categorized globally as 'Least Concern' by the IUCN Red List of Threatened Species; however, some fragmented, small populations are on the verge of extinction, as in Pakistan, where the species is listed as 'Critically Endangered' with a declining population trend. Importantly, the global historical distribution range of the brown bear is undocumented. In the current study, we therefore reconstructed and estimated the historical distribution range of the brown bear using QGIS software and also analyzed the network of protected areas in the past and current ranges of the species. Results showed that the brown bear was more widely distributed in historic times, encompassing 52.6 million km² compared with its current distribution of 38.8 million km², a total range contraction of approximately 26%. In the past, a total of N = 62,234 Protected Areas covering approximately 3.89 million km² lay within the distribution range of the species, while now a total of N = 33,313 Protected Areas covering approximately 2.75 million km² lie within the current range. The brown bear's distribution within protected areas has thus contracted by 1.15 million km², a total reduction in PAs of 29%.

Keywords: brown bear, historic distribution, range contraction, protected areas

Procedia PDF Downloads 11
963 Genetic Diversity and Variation of Nigerian Pigeon (Columba livia domestica) Populations Based on the Mitochondrial Coi Gene

Authors: Foluke E. Sola-Ojo, Ibraheem A. Abubakar, Semiu F. Bello, Isiaka H. Fatima, Sule Bisola, Adesina M. Olusegun, Adeniyi C. Adeola

Abstract:

The domesticated pigeon, Columba livia domestica, has many valuable characteristics, including high nutritional value and a fast growth rate, but information on its genetic diversity in Nigeria is lacking. We therefore examined the genetic variability of mitochondrial cytochrome oxidase subunit I (COI) sequences from 150 domestic pigeons at four different locations. Three haplotypes (HT) were identified in the Nigerian populations; the most common, HT1, was shared with wild and domestic pigeons from Europe, America and Asia, while HT2 and HT3 were unique to Nigeria. Across the four investigated populations, the overall haplotype diversity was 0.052 ± 0.025 and the nucleotide diversity 0.026 ± 0.068. The phylogenetic tree showed significant clustering and a genetic relationship between Nigerian domestic pigeons and other global pigeons, and the median-joining network showed a star-like pattern suggesting population expansion. AMOVA results indicated that genetic variation in Nigerian pigeons occurs mainly within populations (99.93%), while neutrality test results suggested that the Nigerian domestic pigeon population has experienced recent expansion. This study showed low genetic diversity and population differentiation among Nigerian domestic pigeons, consistent with a relatively conserved COI sequence with few polymorphic sites. The COI gene could serve as a candidate molecular marker for investigating the genetic diversity and origin of pigeon species; however, the current data are insufficient for further conclusions, and more evidence from multiple molecular markers is required.
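The haplotype diversity reported above is conventionally computed with Nei's estimator, Hd = n/(n-1) * (1 - Σ p_i²), where p_i is the frequency of haplotype i among n sequences. A sketch with illustrative counts (not the study's data):

```python
def haplotype_diversity(counts):
    """Nei's haplotype (gene) diversity Hd = n/(n-1) * (1 - sum p_i^2),
    where counts[i] is the number of sequences carrying haplotype i."""
    n = sum(counts)
    freq_sq = sum((c / n) ** 2 for c in counts)
    return n / (n - 1) * (1 - freq_sq)
```

A population fixed for one haplotype gives Hd = 0, while many equally frequent haplotypes push Hd toward 1, which is why the study's Hd of 0.052 signals very low diversity.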

Keywords: Nigerian pigeon, COI, genetic diversity, genetic variation, conservation

Procedia PDF Downloads 137
962 Graph-Oriented Summary for Optimized Resource Description Framework Graphs Streams Processing

Authors: Amadou Fall Dia, Maurras Ulbricht Togbe, Aliou Boly, Zakia Kazi Aoul, Elisabeth Metais

Abstract:

Existing RDF (Resource Description Framework) Stream Processing (RSP) systems allow continuous processing of RDF data issued from different application domains, such as weather stations measuring phenomena, geolocation, IoT applications and drinking water distribution management. However, the processing window often expires before the entire session is finished, and RSP systems immediately delete data streams after each processed window. Such a mechanism does not allow optimized exploitation of the RDF data streams, as the most relevant and pertinent information is often not used in due time and is almost impossible to exploit for further analyses. It would be better to keep the most informative part of the data within the streams while minimizing memory storage. In this work, we propose an RDF graph summarization system based on explicitly and implicitly expressed needs, through three main approaches: (1) an approach for user queries (SPARQL) that extracts their needs and groups them into a more global query, (2) an extension of the closeness centrality measure from Social Network Analysis (SNA) to determine the most informative parts of the graph, and (3) an RDF graph summarization technique combining the extracted user query needs and the extended centrality measure. Experiments and evaluations show efficient results in terms of memory storage and of obtaining the most expected approximate query results on summarized graphs compared to the source ones.
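The centrality measure being extended is standard closeness centrality: (n-1) divided by the sum of shortest-path distances from a node to all reachable nodes. A BFS-based sketch of that baseline on an unweighted adjacency-list graph (the paper's extension to RDF streams goes further than this):

```python
from collections import deque

def closeness(graph, node):
    """Classical closeness centrality (n-1) / sum of shortest-path distances
    on an unweighted graph given as {node: [neighbors]}."""
    dist = {node: 0}
    queue = deque([node])
    while queue:  # breadth-first search from `node`
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total else 0.0
```

Nodes with high closeness reach the rest of the graph quickly, which is why they are good candidates for the "most informative" parts kept in the summary.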

Keywords: centrality measures, RDF graphs summary, RDF graphs stream, SPARQL query

Procedia PDF Downloads 163
961 Applying Concurrent Development Process for the Web Using Aspect-Oriented Approach

Authors: Hiroaki Fukuda

Abstract:

This paper presents a concurrent development process for modern web applications, called Rich Internet Applications (RIAs), and describes its effect through a non-trivial application development. In recent years, RIA technologies such as Ajax and Flex have become popular, driven mainly by high-speed networks. An RIA provides sophisticated interfaces and user experiences; its development therefore requires two kinds of engineer: a developer who implements the business logic, and a designer who designs the interface and experience. Although collaborative work is becoming important for RIA development, shared resources such as source code make it difficult. For example, if an interface design is modified after the developers have finished implementing the business logic, they need to repeat the same implementations, along with the tests that verify the application's behavior. MVC architecture and object-oriented programming (OOP) enable dividing an application into modules such as interfaces and logic; however, developers and/or designers still have to write pieces of code (e.g., event handlers) that make these modules work together as an application. Aspect-oriented programming (AOP), on the other hand, is expected to address the complexity of today's application software development: it provides methods to separate crosscutting concerns, scattered pieces of code, from the primary concerns. In this paper, we present a concurrent development process for RIAs built on AOP concepts. This process reduces the resources shared between developers and designers, so they can perform their tasks concurrently. We also describe our experience developing a practical application with the proposed process to demonstrate its applicability.
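The separation of crosscutting concerns that AOP provides can be illustrated, in miniature, with a Python decorator: the logging "aspect" is woven around the business logic without touching its body. This is only an analogy to the paper's approach, not its actual mechanism:

```python
import functools

def logged(func):
    """A logging 'aspect' woven around any business-logic function,
    keeping the crosscutting concern out of the function body itself."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        wrapper.calls.append(func.__name__)  # record the join point
        return func(*args, **kwargs)
    wrapper.calls = []
    return wrapper

@logged
def place_order(item):
    # pure business logic: no logging code appears here
    return f"ordered {item}"
```

Because the aspect and the business logic live in separate definitions, a developer and a designer could evolve them independently, which is the spirit of the concurrent process described above.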

Keywords: aspect-oriented programming, concurrent, development process, rich internet application

Procedia PDF Downloads 276
960 Expression Profiling and Immunohistochemical Analysis of Squamous Cell Carcinoma of Head and Neck (Tumor, Transition Zone, Normal) by Whole Genome Scale Sequencing

Authors: Veronika Zivicova, Petr Broz, Zdenek Fik, Alzbeta Mifkova, Jan Plzak, Zdenek Cada, Herbert Kaltner, Jana Fialova Kucerova, Hans-Joachim Gabius, Karel Smetana Jr.

Abstract:

The ability to determine genome-wide expression profiles of cells and tissues opens a new level of analysis in the quest to define dysregulation in malignancy and thereby identify new tumor markers. Toward this long-term aim, we here address two issues at this level for head and neck cancer specimens: i) defining profiles in different regions, i.e. the tumor, the transition zone and normal control tissue, and ii) comparing complete data sets for seven individual patients. In the flanking immunohistochemical part, special focus is given to adhesion/growth-regulatory galectins, which upregulate chemo- and cytokine expression in an NF-κB-dependent manner, to these regulators, and to markers of differentiation, i.e. keratins. The detailed listing of up- and down-regulations, also available in printed form (1), not only unveiled new candidates for testing as markers but also made the impact of the tumor on the transition zone apparent. The extent of interindividual variation raises a strong cautionary note against assuming uniformity of regulatory events, to be heeded when considering therapeutic implications. Thus, a combination of test targets (and a network analysis for galectins and their downstream effectors) is advised before reaching conclusions on further perspectives.

Keywords: galectins, genome scale sequencing, squamous cell carcinoma, transition zone

Procedia PDF Downloads 207
959 Parameter Identification Analysis in the Design of Rock Fill Dams

Authors: G. Shahzadi, A. Soulaimani

Abstract:

This research work aims to identify the physical parameters of the constitutive soil model in the design of a rockfill dam by inverse analysis. The best parameters of the constitutive soil model are those that minimize the objective function, defined as the difference between the measured and numerical results. The finite element code Plaxis was used for the numerical simulation. Polynomial and neural-network-based response surfaces were generated to analyze the relationship between soil parameters and displacements, and the performance of these surrogate models was analyzed and compared via the root mean square error. A comparative study was carried out over objective functions and optimization techniques. Objective functions were categorized by considering measured data with and without instrument uncertainty, defined by the least squares method, which estimates the norm between the predicted displacements and the measured values; Hydro-Québec provided the measured data sets for the Romaine-2 dam. Stochastic optimization, an approach that can overcome local minima and solve non-convex and non-differentiable problems with ease, was used to obtain the optimum. The Genetic Algorithm (GA), Particle Swarm Optimization (PSO) and Differential Evolution (DE) were compared on the minimization problem; although all these techniques take time to converge, PSO provided the best convergence and the best soil parameters. Overall, parameter identification analysis can be used effectively for rockfill dam applications and has the potential to become a valuable tool for geotechnical engineers assessing dam performance and dam safety.
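The PSO step that won the comparison above can be sketched in a few dozen lines. This is a generic textbook PSO demonstrated on a toy objective, under the assumption of standard inertia/cognitive/social coefficients; it is not the study's actual Plaxis-coupled setup:

```python
import random

def pso(objective, bounds, n_particles=20, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: each particle tracks its personal
    best, and the swarm shares a global best that attracts all particles."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + pull toward personal best + pull toward global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the inverse-analysis setting, `objective` would be the least-squares misfit between measured and surrogate-predicted displacements, and `bounds` the admissible ranges of the soil parameters.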

Keywords: rockfill dam, parameter identification, stochastic analysis, regression, Plaxis

Procedia PDF Downloads 101
958 A Research on the Coordinated Development of Chengdu-Chongqing Economic Circle under the Background of New Urbanization

Authors: Deng Tingting

Abstract:

The coordinated and integrated development of regions is an inevitable requirement for China to move toward high-quality, sustainable development. The Chengdu-Chongqing economic circle, one of the regions with the best economic foundation and strongest economic strength in western China, is of national importance and has strong network-connection characteristics, linking the inland hinterland and connecting the western and national urban networks. Its integrated development is of great strategic significance for the rapid and high-quality development of the western region. Against the background of new urbanization, this paper takes the 16 urban units within the economic circle as its research object and, based on five-year panel data on population, regional economy, and spatial construction and development from 2016 to 2020, uses the entropy method and the Theil index to analyze these three target layers and the causes of the observed disparities. The research shows that there are temporal and spatial differences within the Chengdu-Chongqing economic circle, with significant gaps between the core cities and the surrounding ones. Therefore, reforming and innovating the mechanism of coordinated regional development, breaking administrative barriers, and strengthening the "polar nucleus" radiation function to release the driving force for economic development, especially in the gully areas of the economic development belts, would not only promote coordinated development within the circle but also promote the coordinated and sustainable development of the western region along a high-quality development path.
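The Theil index used above is an entropy-based inequality measure, T = (1/N) Σ (x_i/μ) ln(x_i/μ), which is zero when all regions are equal and grows with disparity. A sketch with illustrative values (not the study's panel data):

```python
import math

def theil_index(values):
    """Theil's T inequality index: (1/N) * sum (x_i/mu) * ln(x_i/mu).
    Zero when all regions are equal; larger as disparity grows."""
    n = len(values)
    mu = sum(values) / n
    return sum((x / mu) * math.log(x / mu) for x in values) / n
```

Its usefulness for regional analysis is that T decomposes additively into within-group and between-group components, e.g. disparity inside versus between the Chengdu and Chongqing sub-circles.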

Keywords: Chengdu-Chongqing economic circle, new urbanization, coordinated regional development, Theil index

Procedia PDF Downloads 80