Search results for: lymph nodes
131 Vision and Challenges of Developing VR-Based Digital Anatomy Learning Platforms and a Solution Set for 3D Model Marking
Authors: Gizem Kayar, Ramazan Bakir, M. Ilkay Koşar, Ceren U. Gencer, Alperen Ayyildiz
Abstract:
Anatomy classes are crucial to the general education of medical students, yet learning anatomy is quite challenging and requires memorization of thousands of structures. In traditional teaching methods, learning materials are still based on books, anatomy mannequins, or videos. This results in many important structures being forgotten after several years. More interactive teaching methods such as virtual reality, augmented reality, gamification, and motion sensors are therefore becoming more popular, since such methods ease the way we learn and keep the material in mind for longer. In this study, we designed a virtual reality-based digital head anatomy platform to investigate whether a fully interactive anatomy platform is effective for learning anatomy and to understand the level of teaching and learning optimization. The head is one of the most complicated structures in human anatomy, with thousands of tiny, unique structures, which makes head anatomy one of the most difficult parts to understand during class sessions. Therefore, we developed a fully interactive digital tool with 3D model marking, quiz structures, 2D/3D puzzle structures, and VR support, so as to integrate the power of VR and gamification. The project has been developed in the Unity game engine with an HTC Vive Cosmos VR headset. The head anatomy 3D model was selected with full skeletal, muscular, integumentary, head, teeth, lymph, and vein systems. The biggest issue during development was the complexity of our model and its marking in the 3D world system. 3D model marking requires access to each unique structure in the listed subsystems, which means hundreds of markings need to be made. Some parts of our 3D head model were monolithic, so we worked on dividing such parts into subparts, which is very time-consuming. In order to subdivide monolithic parts, one must use an external modeling tool; however, such tools generally come with steep learning curves, and seamless division is not ensured. The second option was to attach tiny colliders to all unique items for mouse interaction; however, outer colliders that cover inner trigger colliders cause overlapping, and these colliders repel each other. The third option was raycasting; however, due to its view-based nature, raycasting has some inherent problems: as the model rotates, the view direction changes very frequently, and directional computations become even harder. This is why we finally turned to the local coordinate system. By taking the pivot point of the model into consideration (the back of the nose), each sub-structure is marked with its own local coordinate with respect to the pivot. After converting the mouse position to the world position and checking its relation to the corresponding structure's local coordinate, we were able to mark all points correctly. The advantage of this method is its applicability and accuracy for all types of monolithic anatomical structures.
Keywords: anatomy, e-learning, virtual reality, 3D model marking
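As a rough illustration of the final marking approach described above (a minimal sketch, not the authors' Unity implementation; the structure names, pivot, transform values, and tolerance below are hypothetical), the idea of converting a clicked world-space point into the model's pivot-relative local frame and matching it against stored per-structure local coordinates can be sketched as:

```python
import numpy as np

# Hypothetical catalogue of sub-structures, each stored as a local coordinate
# relative to the model pivot (assumed here to be the back of the nose).
STRUCTURES = {
    "zygomatic_bone_left": np.array([-0.042, 0.011, 0.018]),
    "mandible": np.array([0.000, -0.085, 0.030]),
    "frontal_bone": np.array([0.000, 0.095, 0.010]),
}

def world_to_local(point_world, pivot_world, rotation, scale):
    """Convert a world-space point into the model's pivot-relative local frame."""
    # Undo translation, then rotation, then scale (inverse of local->world).
    return rotation.T @ (point_world - pivot_world) / scale

def mark_structure(click_world, pivot_world, rotation, scale, tolerance=0.01):
    """Return the sub-structure closest to the clicked point, provided it lies
    within the given tolerance (in local units); otherwise return None."""
    click_local = world_to_local(click_world, pivot_world, rotation, scale)
    name, dist = min(
        ((n, np.linalg.norm(click_local - p)) for n, p in STRUCTURES.items()),
        key=lambda item: item[1],
    )
    return name if dist <= tolerance else None

if __name__ == "__main__":
    pivot = np.array([1.2, 0.9, 0.4])   # pivot position in world space
    rot = np.eye(3)                     # current model rotation (identity here)
    print(mark_structure(np.array([1.158, 0.911, 0.418]), pivot, rot, scale=1.0))
```

Because the lookup is done in the model's local frame, the same stored coordinates remain valid however the model is rotated or scaled in the scene, which is the advantage the abstract attributes to this approach.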
130 Public Squares and Their Potential for Social Interactions: A Case Study of Historical Public Squares in Tehran
Authors: Asma Mehan
Abstract:
Under the pressure of technological change, population growth, and vehicular traffic, Iranian historical squares have lost their significance and are no longer the main social nodes of society. This research focuses on how historical public squares can inspire designers to enhance social interactions among citizens in the Iranian urban context. Moreover, the recent master plan of Tehran demonstrates the lack of public spaces designed for the purpose of people's social gatherings. To fill this gap, the current situation of seven selected primary historical public squares in Tehran, including Sabze Meydan, Arg, Topkhaneh, Baherstan, Mokhber-al-dole, Rah Ahan, and Hassan Abad, has first been compared. Then, the elements influencing social interaction in public squares, such as subjective factors (human relationships and memories) and objective factors (the natural and built environment), have been investigated. In conclusion, several strategies are proposed for improving social interaction in historical public squares, such as holding cultural, national, athletic, and religious events; defining different and new functions in the squares' surroundings; increasing pedestrian routes; reviving collective memory; demonstrating the historical importance of the square; eliminating visual obstacles across the square; organizing the natural elements of the square; and providing appropriate pavement for social activities. Finally, it is argued that the combination of all influencing factors, namely human interactions, natural elements, and built environment criteria, will enhance the potential of historical public squares for social interaction.
Keywords: historical square, Iranian public square, social interaction, Tehran
129 Case Report of Left Atrial Myxoma Diagnosed by Bedside Echocardiography
Authors: Anthony S. Machi, Joseph Minardi
Abstract:
We present a case report of left atrial myxoma diagnosed by bedside transesophageal echocardiography (TEE). Left atrial myxoma is the most common benign cardiac tumor and can obstruct blood flow and cause valvular insufficiency. Common symptoms include dyspnea, pulmonary edema, and other features of left heart failure, in addition to the release of tumor fragments as emboli. The availability of bedside ultrasound equipment is essential for the quick diagnosis and treatment of various emergency conditions, including cardiac neoplasms. A 48-year-old Caucasian female with a four-year history of an untreated renal mass and anemia presented to the ED with two months of sharp, intermittent, bilateral flank pain radiating into the abdomen. She also reported intermittent vomiting and constipation along with generalized body aches, night sweats, and a 100-pound weight loss over the last year. She had a CT in 2013 showing a 3 cm left renal mass and a second CT in April 2016 showing a 3.8 cm left renal mass, along with a past medical history of diverticulosis, chronic bronchitis, dyspnea on exertion, uncontrolled hypertension, and hyperlipidemia. Her maternal family history is positive for breast cancer, hypertension, and Type II diabetes. Her paternal family history is positive for stroke. She was a current everyday smoker with an 11 pack-year history. Alcohol and drug use were denied. Physical exam was notable for a grade II/IV systolic murmur at the right upper sternal border, dyspnea on exertion without angina, and a tender left lower quadrant. Her vitals and labs were notable for a blood pressure of 144/96, a heart rate of 96 beats per minute, pulse oximetry of 96%, hemoglobin of 7.6 g/dL, hypokalemia, hypochloremia, and multiple other abnormalities. Physicians ordered a CT to evaluate her flank pain, which revealed a 7.2 x 8.9 x 10.5 cm mixed cystic/solid mass in the lower pole of the left kidney and a filling defect in the left atrium. Bedside TEE was ordered to follow up on the filling defect. TEE reported an ejection fraction of 60-65% and visualized a mobile 6 x 3 cm mass in the left atrium attached to the interatrial septum and extending into the mitral valve. Cardiothoracic Surgery and Urology were consulted and confirmed a diagnosis of left atrial myxoma and clear cell renal cell carcinoma. The patient returned a week later due to worsening nausea and vomiting and underwent emergent nephrectomy, lymph node dissection, and colostomy due to a necrotic colon. Her condition declined over the next four months due to lung and brain metastases, infections, and other complications until she passed away.
Keywords: bedside ultrasound, echocardiography, emergency medicine, left atrial myxoma
128 Multi-Stream Graph Attention Network for Recommendation with Knowledge Graph
Abstract:
In recent years, graph neural networks have been widely used in knowledge graph recommendation. Existing recommendation methods based on graph neural networks extract information from the knowledge graph through entities and relations, which may not be an efficient way of extracting information. In order to better surface entity information that is useful for the current recommendation task in the knowledge graph, we propose an end-to-end neural network model based on a multi-stream graph attention mechanism (MSGAT), which can effectively integrate the knowledge graph into the recommendation system by evaluating the importance of entities from both the user and item sides. Specifically, we use an attention mechanism from the user's perspective to distill information from the domain nodes of the predicted item in the knowledge graph, enhance the user's information on items, and generate the feature representation of the predicted item. Since the user's historical click items reflect the user's interest distribution, we propose a multi-stream attention mechanism that, based on the user's preference for entities and relations and the similarity between the item to be predicted and the entities, aggregates the neighborhood entity information of the user's historically clicked items in the knowledge graph and generates the user's feature representation. We evaluate our model on three real recommendation datasets: MovieLens-1M (ML-1M), LFM-1B 2015 (LFM-1B), and Amazon-Book (AZ-book). Experimental results show that, compared with the most advanced models, our proposed model better captures the entity information in the knowledge graph, which demonstrates the validity and accuracy of the model.
Keywords: graph attention network, knowledge graph, recommendation, information propagation
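A minimal sketch of the kind of attention-weighted neighbor aggregation such a multi-stream mechanism builds on (illustrative only; the actual MSGAT scoring functions, dimensions, and data are not specified here, and all names below are assumptions):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attentive_aggregate(item_vec, neighbor_vecs, relation_vecs, user_vec):
    """Aggregate knowledge-graph neighbor entities of an item, weighting each
    neighbor by a user-conditioned attention score (simple dot-product surrogate)."""
    scores = np.array([
        user_vec @ (r + n) for r, n in zip(relation_vecs, neighbor_vecs)
    ])
    weights = softmax(scores)
    neighborhood = (weights[:, None] * np.array(neighbor_vecs)).sum(axis=0)
    # Enhance the item representation with the attended neighborhood.
    return item_vec + neighborhood

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 8
    item = rng.normal(size=d)
    neighbors = [rng.normal(size=d) for _ in range(4)]
    relations = [rng.normal(size=d) for _ in range(4)]
    user = rng.normal(size=d)
    print(attentive_aggregate(item, neighbors, relations, user).shape)  # (8,)
```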
127 Gulfnet: The Advent of Computer Networking in Saudi Arabia and Its Social Impact
Authors: Abdullah Almowanes
Abstract:
The speed of adoption of new information and communication technologies is often seen as an indicator of the growth of knowledge- and technological-innovation-based regional economies. Indeed, technological progress and scientific inquiry in any society have undergone a particularly profound transformation with the introduction of computer networks. In the spring of 1981, the Bitnet network was launched to link thousands of nodes all over the world. In 1985, as one of the first adopters of Bitnet, Saudi Arabia launched a Bitnet-based network named Gulfnet that linked computer centers, universities, and libraries of Saudi Arabia and other Gulf countries through high-speed communication lines. In this paper, the origins and the deployment of Gulfnet are discussed, as well as the social, economic, political, and cultural ramifications of the new information reality created by the network. Despite its significance, the social and cultural aspects of Gulfnet have not previously been investigated to a satisfactory degree in the history of science and technology literature. The presented research is based on extensive archival work aimed at seeking out and analyzing primary evidence from archival sources and records. During its decade-and-a-half-long existence, Gulfnet demonstrated that the scope and functionality of public computer networks in Saudi Arabia had to be fine-tuned for compliance with the Islamic culture and political system of the country. It also helped lay the groundwork for the subsequent introduction of the Internet. Since the 1980s, in just a few decades, the proliferation of computer networks has transformed communications worldwide.
Keywords: Bitnet, computer networks, computing and culture, Gulfnet, Saudi Arabia
126 Diagnosis, Treatment, and Prognosis in Cutaneous Anaplastic Lymphoma Kinase-Positive Anaplastic Large Cell Lymphoma: A Narrative Review Apropos of a Case
Authors: Laura Gleason, Sahithi Talasila, Lauren Banner, Ladan Afifi, Neda Nikbakht
Abstract:
Primary cutaneous anaplastic large cell lymphoma (pcALCL) accounts for 9% of all cutaneous T-cell lymphomas. pcALCL is classically characterized as a solitary papulonodule that often enlarges, ulcerates, and can be locally destructive, but it generally exhibits an indolent course, with 5-year overall survival estimated to be 90%. Distinguishing pcALCL from systemic ALCL (sALCL) is essential, as sALCL confers a poorer prognosis, with average 5-year survival of 40-50%. Although extremely rare, there have been several cases of ALK-positive ALCL diagnosed on skin biopsy without evidence of systemic involvement, which poses several challenges in the classification, prognostication, treatment, and follow-up of these patients. Objectives: We present a case of cutaneous ALK-positive ALCL without evidence of systemic involvement and a narrative review of the literature to further characterize ALK-positive ALCL limited to the skin as a distinct variant with a unique presentation, history, and prognosis. A 30-year-old woman presented for evaluation of an erythematous-violaceous papule that had been present on her right chest for two months. With the development of multifocal disease and persistent lymphadenopathy, a bone marrow biopsy and a lymph node excisional biopsy were performed to assess for systemic disease. Both biopsies were unrevealing. The patient was counseled on pursuing systemic therapy consisting of brentuximab, cyclophosphamide, doxorubicin, and prednisone, given the concern for sALCL. Apropos of the patient, we searched the English literature for clinically evident cutaneous ALK-positive ALCL cases with and without systemic involvement. Risk factors, such as tumor location, number, size, ALK localization, ALK translocations, and recurrence, were evaluated in cases of cutaneous ALK-positive ALCL. The majority of patients with cutaneous ALK-positive ALCL did not progress to systemic disease. The majority of adult cases that progressed to systemic disease had recurring skin lesions and cytoplasmic localization of ALK. ALK translocations did not influence disease progression. Mean time to disease progression was 16.7 months, and significant mortality (50%) was observed in the cases that progressed to systemic disease. Pediatric cases did not exhibit a trend similar to adult cases. In both the adult and pediatric cases, a subset of cutaneous-limited ALK-positive ALCL was treated with chemotherapy. None of the cases treated with chemotherapy progressed to systemic disease. Apropos of an ALK-positive ALCL patient with clinically cutaneous-limited disease in the histologic presence of systemic markers, we discuss the literature data, highlighting the crucial issues related to developing a clinical strategy for approaching this rare subtype of ALCL. Physicians need to be aware of the overall spectrum of ALCL, including cutaneous-limited disease, systemic disease, disease with NPM-ALK translocation, disease with ALK and EMA positivity, and disease with skin recurrence.
Keywords: anaplastic large cell lymphoma, systemic, cutaneous, anaplastic lymphoma kinase, ALK, ALCL, sALCL, pcALCL, cALCL
125 Emergence of Information Centric Networking and Web Content Mining: A Future Efficient Internet Architecture
Authors: Sajjad Akbar, Rabia Bashir
Abstract:
With the growth in the number of users, Internet usage has evolved, and owing to its key design principle, the Internet has expanded incredibly in size. This tremendous growth has brought new applications (mobile video and cloud computing) as well as new user requirements, i.e., a content distribution environment, mobility, ubiquity, security, and trust. Users are more interested in content than in the peer nodes they communicate with. The current Internet architecture is a host-centric networking approach, which is not well suited to these types of applications. With the growing use of multiple interactive applications, the host-centric approach is considered less efficient, as it depends on physical location; for this reason, Information Centric Networking (ICN) is considered a potential future Internet architecture. It is an approach that introduces uniquely named data as a core Internet principle. It uses a receiver-oriented approach rather than a sender-oriented one and introduces a naming-based information system at the network layer. Although ICN is considered a future Internet architecture, it has attracted considerable criticism, mainly concerning how it will manage the most relevant content. Web Content Mining (WCM) approaches can help with the appropriate data management of ICN. To address this issue, this paper contributes by (i) discussing multiple ICN approaches, (ii) analyzing different Web Content Mining approaches, and (iii) creating a new Internet architecture by merging ICN and WCM to solve the data management issues of ICN. From ICN, Content-Centric Networking (CCN) is selected for the new architecture, whereas the agent-based approach from Web Content Mining is selected to find the most appropriate data.
Keywords: agent based web content mining, content centric networking, information centric networking
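To make the receiver-oriented, name-based idea concrete, here is a toy sketch of a CCN-style node that answers Interests by content name (purely illustrative; it is not the architecture proposed in the paper, and the class, method, and content names below are invented):

```python
class CCNNode:
    """Toy content-centric node: serves Interests by name from a content store,
    otherwise records the request in a Pending Interest Table (PIT)."""

    def __init__(self):
        self.content_store = {}   # name -> cached data
        self.pit = {}             # name -> list of requesting faces

    def publish(self, name, data):
        self.content_store[name] = data

    def on_interest(self, name, face):
        if name in self.content_store:               # cache hit: reply at once
            return self.content_store[name]
        self.pit.setdefault(name, []).append(face)   # else remember who asked
        return None                                  # and forward upstream (omitted)

    def on_data(self, name, data):
        self.content_store[name] = data              # cache for future Interests
        return self.pit.pop(name, [])                # faces waiting for this name


if __name__ == "__main__":
    node = CCNNode()
    node.publish("/news/sports/today", "match report")
    print(node.on_interest("/news/sports/today", face=1))  # served from cache by name
```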
124 Insights Into Serotonin-Receptor Binding and Stability via Molecular Dynamics Simulations: Key Residues for Electrostatic Interactions and Signal Transduction
Authors: Arunima Verma, Padmabati Mondal
Abstract:
Serotonin-receptor binding plays a key role in several neurological and biological processes, including mood, sleep, hunger, cognition, learning, and memory. In this article, we performed molecular dynamics simulations to examine the key residues that play an essential role in the binding of serotonin to the G-protein-coupled 5-HT₁ᴮ receptor (5-HT₁ᴮR) via electrostatic interactions. An end-point free energy calculation method (MM-PBSA) was used to determine the stability of the 5-HT₁ᴮR upon serotonin binding. Single-point mutation of the polar or charged amino acid residues (Asp129, Thr134) at the binding site and calculation of the binding free energy validate the importance of these residues for the stability of the serotonin-receptor complex. Principal component analysis indicates that the serotonin-bound 5-HT₁ᴮR is more stabilized than the apo-receptor in terms of dynamical changes. The difference dynamic cross-correlation map shows the correlation between the transmembrane domain and mini-Go, which indicates that signal transduction occurs between mini-Go and the receptor. Allosteric communication analysis reveals the key nodes for signal transduction in 5-HT₁ᴮR. These results provide useful insights into the signal transduction pathways and into mutagenesis studies aimed at regulating the functionality of the complex. The developed protocols can be applied to study local non-covalent interactions and long-range allosteric communication in any protein-ligand system for computer-aided drug design.
Keywords: allostery, CADD, MD simulations, MM-PBSA
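For reference, the end-point binding free energy in MM-PBSA is conventionally decomposed as follows (a standard textbook form, not a result specific to this study):

```latex
\Delta G_{\text{bind}} = \langle G_{\text{complex}} \rangle - \langle G_{\text{receptor}} \rangle - \langle G_{\text{ligand}} \rangle,
\qquad
G = E_{\text{MM}} + G_{\text{PB}} + G_{\text{SA}} - T S
```

where E_MM is the molecular mechanics energy (bonded, electrostatic, and van der Waals terms), G_PB the polar solvation term from the Poisson-Boltzmann equation, G_SA the nonpolar surface-area term, and TS the entropic contribution averaged over simulation snapshots.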
123 Histone Deacetylase Inhibitor Valproic Acid Sensitizes Human Melanoma Cells to an Alkylating Agent and a PARP Inhibitor
Authors: Małgorzata Drzewiecka, Tomasz Śliwiński, Maciej Radek
Abstract:
The inhibition of histone deacetylases (HDACs) holds promise as a potential anti-cancer therapy because histone and non-histone protein acetylation is frequently disrupted in cancer, leading to cancer initiation and progression. Additionally, histone deacetylase inhibitors (HDACi), such as the class I HDAC inhibitor valproic acid (VPA), have been shown to enhance the effectiveness of DNA-damaging factors such as cisplatin or radiation. In this study, we found that using VPA in combination with talazoparib (BMN-637, a PARP1 inhibitor, PARPi) and/or dacarbazine (DTIC, an alkylating agent) resulted in increased DNA double-strand breaks (DSBs) and reduced survival and proliferation of melanoma cells, while not affecting primary melanocytes. Furthermore, pharmacologic inhibition of class I HDACs sensitizes melanoma cells to apoptosis following exposure to DTIC and BMN-637. In addition, inhibition of HDAC sensitized melanoma cells to dacarbazine and BMN-637 in melanoma xenografts in vivo. At the mRNA and protein levels, the histone deacetylase inhibitor downregulated RAD51 and FANCD2. This study shows that combining an HDACi, an alkylating agent, and a PARPi could potentially enhance the treatment of melanoma, which is known to be one of the most aggressive malignant tumors. The findings presented here point to a scenario in which HDACs, via enhancing the HR-dependent repair of DSBs created during the processing of DNA lesions, are essential nodes in the resistance of malignant melanoma cells to methylating agent-based therapies.
Keywords: melanoma, HDAC, PARP inhibitor, valproic acid
122 A Five-Year Experience of Intensity Modulated Radiotherapy in Nasopharyngeal Carcinomas in Tunisia
Authors: Omar Nouri, Wafa Mnejja, Fatma Dhouib, Syrine Zouari, Wicem Siala, Ilhem Charfeddine, Afef Khanfir, Leila Farhat, Nejla Fourati, Jamel Daoud
Abstract:
Purpose and Objective: The intensity-modulated radiotherapy (IMRT) technique, associated with induction chemotherapy (IC) and/or concomitant chemotherapy (CC), is currently the recommended treatment modality for nasopharyngeal carcinomas (NPC). The aim of this study was to evaluate the therapeutic results and the patterns of relapse with this treatment protocol. Material and methods: A retrospective monocentric study of 145 patients with NPC treated between June 2016 and July 2021. All patients received IMRT with a simultaneous integrated boost (SIB) of 33 daily fractions at a dose of 69.96 Gy for the high-risk volume, 60 Gy for the intermediate-risk volume, and 54 Gy for the low-risk volume. The high-risk volume dose was 66.5 Gy in children. Survival analysis was performed according to the Kaplan-Meier method, and the log-rank test was used to compare factors that may influence survival. Results: Median age was 48 years (11-80), with a sex ratio of 2.9. One hundred twenty tumors (82.7%) were classified as stage III-IV according to the 2017 UICC TNM classification. Ten patients (6.9%) were metastatic at diagnosis. One hundred thirty-five patients (93.1%) received IC, 104 of which (77%) were TPF-based (taxanes, cisplatin, and 5-fluorouracil). One hundred thirty-eight patients (95.2%) received CC, mostly cisplatin (134 cases, 97%). After a median follow-up of 50 months [22-82], 46 patients (31.7%) had a relapse: 12 (8.2%) experienced local and/or regional relapse after a median of 18 months [6-43], 29 (20%) experienced distant relapse after a median of 9 months [2-24], and 5 patients (3.4%) had both. Thirty-five patients (24.1%) died, including 5 (3.4%) from a cause other than their cancer. Three-year overall survival (OS), cancer-specific survival, disease-free survival, metastasis-free survival, and loco-regional relapse-free survival were 78.1%, 81.3%, 67.8%, 74.5%, and 88.1%, respectively. Anatomo-clinical factors predicting OS were age > 50 years (88.7 vs. 70.5%; p=0.004), diabetes history (81.2 vs. 66.7%; p=0.027), UICC N classification (100 vs. 95 vs. 77.5 vs. 68.8% respectively for N0, N1, N2, and N3; p=0.008), the practice of a lymph node biopsy (84.2 vs. 57%; p=0.05), and UICC TNM stage III-IV (93.8 vs. 73.6% respectively for stages I-II vs. III-IV; p=0.044). Therapeutic factors predicting OS were the number of CC courses (fewer than 4 courses: 65.8 vs. 86%; p=0.03; fewer than 5 courses: 71.5 vs. 89%; p=0.041), weight loss > 10% during treatment (84.1 vs. 60.9%; p=0.021), and a total cumulative cisplatin dose, including IC and CC, < 380 mg/m² (64.4 vs. 87.6%; p=0.003). Radiotherapy delay and total duration did not significantly affect OS. No grade 3-4 late side effects were noted in the 127 evaluable patients (87.6%). The most common toxicity was dry mouth, which was grade 2 in 47 cases (37%) and grade 1 in 55 cases (43.3%). Conclusion: IMRT for nasopharyngeal carcinoma has provided a high loco-regional control rate for patients over the last five years. However, distant relapses remain frequent and determine the prognosis. We identified several anatomo-clinical and therapeutic prognostic factors. Therefore, high-risk patients require a more aggressive therapeutic approach, such as radiotherapy dose escalation or the addition of adjuvant chemotherapy.
Keywords: therapeutic results, prognostic factors, intensity-modulated radiotherapy, nasopharyngeal carcinoma
121 Event Driven Dynamic Clustering and Data Aggregation in Wireless Sensor Network
Authors: Ashok V. Sutagundar, Sunilkumar S. Manvi
Abstract:
Energy, delay, and bandwidth are the prime issues in wireless sensor networks (WSNs). Energy usage optimization and efficient bandwidth utilization are important issues in WSNs. Event-triggered data aggregation facilitates such optimization for the event-affected area in a WSN. Reliable delivery of critical information to the sink node is also a major challenge in WSNs. To tackle these issues, we propose an event-driven dynamic clustering and data aggregation scheme for WSNs that enhances the lifetime of the network by minimizing redundant data transmission. The proposed scheme operates as follows: (1) Whenever an event is triggered, the event-triggered node selects the cluster head. (2) The cluster head gathers data from sensor nodes within the cluster. (3) The cluster head identifies and classifies the events out of the collected data using a Bayesian classifier. (4) Aggregation of data is done using a statistical method. (5) The cluster head discovers the paths to the sink node using residual energy, path distance, and bandwidth. (6) If the aggregated data is critical, the cluster head sends the aggregated data over multiple paths for reliable data communication. (7) Otherwise, the aggregated data is transmitted towards the sink node over the single path with the most bandwidth and residual energy. The performance of the scheme is validated for various WSN scenarios to evaluate the effectiveness of the proposed approach in terms of aggregation time, cluster formation time, and energy consumed for aggregation.
Keywords: wireless sensor network, dynamic clustering, data aggregation, wireless communication
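A condensed sketch of steps (1), (4), and (7) of the scheme, assuming simple node records (the fields, weights, and statistical aggregate used here are illustrative assumptions, not the exact scheme):

```python
from statistics import mean

def elect_cluster_head(nodes):
    """Step (1): the event-triggered node picks the neighbor with the most
    residual energy as cluster head (illustrative criterion)."""
    return max(nodes, key=lambda n: n["energy"])

def aggregate_readings(readings):
    """Step (4): statistical aggregation of readings collected within the
    cluster; the mean is used here as a simple placeholder."""
    return mean(readings)

def select_path(paths):
    """Step (7): for non-critical data, pick the single path with the best
    combination of bandwidth and residual energy (simple weighted score)."""
    return max(paths, key=lambda p: 0.5 * p["bandwidth"] + 0.5 * p["energy"])

if __name__ == "__main__":
    nodes = [{"id": 1, "energy": 0.8}, {"id": 2, "energy": 0.6}]
    head = elect_cluster_head(nodes)
    value = aggregate_readings([21.4, 21.9, 22.1])
    path = select_path([{"id": "A", "bandwidth": 250, "energy": 0.7},
                        {"id": "B", "bandwidth": 180, "energy": 0.9}])
    print(head["id"], round(value, 2), path["id"])
```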
120 Variability for Nodulation and Yield Traits in Biofertilizer Treated and Untreated Pea (Pisum sativum L.) Varieties
Authors: Areej Javaid, Nishat Fatima, Mehwish Naseer
Abstract:
Biofertilizers are used extensively in agriculture to increase crop productivity. Pakistan spends a huge amount on the purchase of synthetic fertilizers every year. The use of natural compounds to enhance crop productivity is a major area of interest nowadays, as they are safe for human health and the environment. Legumes have the intrinsic ability to enrich the nutrient status of the soil because of the presence of nitrogen-fixing bacteria in their nodules. This research determined the effect of a biofertilizer on nodulation attributes and yield of the pea plant. Seeds of pea varieties were treated with a slurry of biofertilizer prepared in a 10% sugar solution just before sowing. The impact of the biofertilizer on different parameters of growth, yield, and nodulation was observed. Analysis of variance showed that plant height, days to flowering, number of nodes, days to first pod, root length, and plant height exhibited significant genetic variation. All the yield parameters, including the number of pods per plant, number of seeds per pod, and seed fresh and dry weight, showed significant results under treatment. Among the nodulation parameters, nodule number responded positively to the biofertilizer treatment. Genotype 2001-40 showed better performance, followed by 2001-20 and LINA-PAK, in all the parameters, whereas 2001-40 and 2001-20 performed well in nodulation and yield parameters. Consequently, seed treatment with biofertilizer before sowing is recommended to obtain a higher crop yield.
Keywords: biological nitrogen fixation, correlation analysis, quantitative inheritance, varietal responses
119 Borrowing Performance: A Network Connectivity Analysis of Second-Tier Cities in Turkey
Authors: Eğinç Simay Ertürk, Ferhan Gezici
Abstract:
The decline of large cities and the rise of second-tier cities have been observed as a global trend with significant implications for economic development and urban planning. In this context, the concepts of agglomeration shadow and borrowed size have gained importance as network externalities that affect the growth and development of surrounding areas. Istanbul, Izmir, and Ankara are Turkey's most significant metropolitan cities and play a significant role in the country's economy. The surrounding cities rely on these metropolitan cities for economic growth and development. However, the concentration of resources and investment in a single location can lead to agglomeration shadows in the surrounding areas. On the other hand, network connectivity between metropolitan and second-tier cities can result in borrowed function and performance, enabling smaller cities to access resources, investment, and knowledge they would not otherwise have access to. The study hypothesizes that the network connectivity between second-tier and metropolitan cities in Turkey enables second-tier cities to increase their urban performance by borrowing size through these networks. Regression analysis will be used to identify the specific network connectivity parameters most strongly associated with urban performance. Network connectivity will be measured with parameters such as transportation nodes and telecommunications infrastructure, and urban performance will be measured with an index including parameters such as employment, education, and industry entrepreneurship, with data at the province level. The contribution of the study lies in its research on how networking can benefit second-tier cities in Turkey.
Keywords: network connectivity, borrowed size, agglomeration shadow, secondary cities
118 Improving the Global Competitiveness of SMEs by Logistics Transportation Management: Case Study Chicken Meat Supply Chain
Authors: P. Vanichkobchinda
Abstract:
The logistics transportation technique of open vehicle routing (OVR) is an approach to transportation cost reduction, especially for long-distance pickup and delivery nodes. The outstanding characteristic of OVR is that the route's starting node and ending node are not necessarily the same, as they are in typical vehicle routing problems. This advantage enables the routing to flow continuously, and the vehicle does not always return to its home base. This research aims to develop a heuristic for the open vehicle routing problem with pickup and delivery under time window and loading capacity constraints that minimizes the total distance. The proposed heuristic is based on the insertion method, which is simple, suitable for rapid calculation, and allows new additional transportation requirements to be inserted along the original paths. Cost comparisons between the proposed heuristic and the nearest neighbor method currently used by the companies show that the insertion heuristic performs better. The research indicates that the improved transport calculation and open vehicle routing with the insertion heuristic yield a better outcome, with cost savings of 34.3 percent on average. Moreover, the proposed heuristic gave superior solutions in all types of test problems. In conclusion, the proposed heuristic can effectively and efficiently solve the open vehicle routing problem.
Keywords: business competitiveness, cost reduction, SMEs, logistics transportation, VRP
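A bare-bones sketch of the cheapest-insertion idea behind such a heuristic, ignoring the time window and capacity constraints for brevity (the distances, node names, and open-route structure below are illustrative assumptions, not the full method of the paper):

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def cheapest_insertion(route, new_node, coords):
    """Insert new_node at the position of the open route that increases the
    total travelled distance the least (start and end need not coincide)."""
    best_pos, best_extra = None, float("inf")
    for i in range(len(route) + 1):
        prev = coords[route[i - 1]] if i > 0 else None
        nxt = coords[route[i]] if i < len(route) else None
        extra = 0.0
        if prev is not None:
            extra += dist(prev, coords[new_node])
        if nxt is not None:
            extra += dist(coords[new_node], nxt)
        if prev is not None and nxt is not None:
            extra -= dist(prev, nxt)
        if extra < best_extra:
            best_pos, best_extra = i, extra
    return route[:best_pos] + [new_node] + route[best_pos:], best_extra

if __name__ == "__main__":
    coords = {"D1": (0, 0), "C1": (2, 1), "C2": (4, 0), "C3": (3, 3)}
    route = ["D1", "C1", "C2"]            # an open route: no return to D1
    route, extra = cheapest_insertion(route, "C3", coords)
    print(route, round(extra, 2))
```

In a full implementation, each candidate insertion would additionally be checked against the customer time windows and the remaining vehicle capacity before being accepted.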
117 A Low-Power Two-Stage Seismic Sensor Scheme for Earthquake Early Warning System
Authors: Arvind Srivastav, Tarun Kanti Bhattacharyya
Abstract:
The north-eastern, Himalayan, and Eastern Ghats belts of India comprise earthquake-prone, remote, and hilly terrain. Earthquakes have caused enormous damage in these regions in the past. A wireless sensor network based earthquake early warning system (EEWS) is being developed to mitigate the damage caused by earthquakes. It consists of sensor nodes, distributed over the region, that perform majority voting on the output of the seismic sensors in the vicinity and relay a message to a base station to alert residents when an earthquake is detected. At the heart of the EEWS is a low-power two-stage seismic sensor that continuously tracks seismic events from the incoming three-axis accelerometer signal in the first stage and, in the presence of a seismic event, triggers the second-stage P-wave detector that detects the onset of the P-wave in an earthquake event. The parameters of the P-wave detector have been optimized to minimize detection time and maximize detection accuracy. The working of the sensor scheme has been verified with data from seven earthquakes retrieved from IRIS. In all test cases, the scheme detected the onset of the P-wave accurately. It has also been established that the P-wave onset detection time reduces as the sampling rate increases: with the test data, the detection time for data sampled at 10 Hz was around 2 seconds, which reduced to 0.3 seconds for data sampled at 100 Hz.
Keywords: earthquake early warning system, EEWS, STA/LTA, polarization, wavelet, event detector, P-wave detector
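The keywords mention an STA/LTA-style trigger; a minimal illustration of that classic short-term/long-term average ratio detector on a one-dimensional trace is shown below (the window lengths, threshold, and synthetic data are illustrative assumptions, not the optimized parameters reported in the paper):

```python
import numpy as np

def sta_lta_onset(trace, fs, sta_win=0.5, lta_win=5.0, threshold=3.0):
    """Return the index of the first sample where the short-term average (STA)
    of the signal energy exceeds `threshold` times the long-term average (LTA)."""
    nsta, nlta = int(sta_win * fs), int(lta_win * fs)
    energy = trace.astype(float) ** 2
    csum = np.cumsum(energy)
    for i in range(nlta, len(trace)):
        sta = (csum[i] - csum[i - nsta]) / nsta
        lta = (csum[i] - csum[i - nlta]) / nlta
        if lta > 0 and sta / lta >= threshold:
            return i
    return None

if __name__ == "__main__":
    fs = 100.0                                            # samples per second
    trace = 0.05 * np.random.default_rng(1).standard_normal(2000)
    trace[1200:] += np.sin(2 * np.pi * 5 * np.arange(800) / fs)  # synthetic onset
    idx = sta_lta_onset(trace, fs)
    print(None if idx is None else idx / fs, "seconds")
```

Higher sampling rates shorten the trigger latency simply because the onset sample is reached sooner in wall-clock time, which is consistent with the trend reported above.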
116 Code Embedding for Software Vulnerability Discovery Based on Semantic Information
Authors: Joseph Gear, Yue Xu, Ernest Foo, Praveen Gauravaran, Zahra Jadidi, Leonie Simpson
Abstract:
Deep learning methods are seeing increasing application to the long-standing security research goal of automatic vulnerability detection in source code. Attention, however, must still be paid to the task of producing vector representations of source code (code embeddings) as input for these deep learning models. Graphical representations of code, most predominantly Abstract Syntax Trees and Code Property Graphs, have received some use in this task of late; however, for very large graphs representing very large code snippets, learning becomes prohibitively computationally expensive. This expense may be reduced by intelligently pruning the input to only vulnerability-relevant information; however, little research in this area has been performed. Additionally, most existing work comprehends code based solely on the structure of the graph, at the expense of the information contained in the nodes of the graph. This paper proposes Semantic-enhanced Code Embedding for Vulnerability Discovery (SCEVD), a deep learning model which uses semantic-based feature selection for its vulnerability classification model. It uses information from the nodes as well as the structure of the code graph in order to select the features that are most indicative of the presence or absence of vulnerabilities. The model is implemented and experimentally tested using the SARD Juliet vulnerability test suite to determine its efficacy. It is able to improve on existing code graph feature selection methods, as demonstrated by its improved ability to discover vulnerabilities.
Keywords: code representation, deep learning, source code semantics, vulnerability discovery
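One simple way to picture semantics-aware pruning of a code graph before embedding is to keep only nodes whose tokens look vulnerability-relevant, plus their immediate neighbors (a toy illustration; SCEVD's actual feature selection is not reproduced here, and the keyword list is an assumption):

```python
import networkx as nx

# Hypothetical tokens often associated with memory/IO vulnerabilities.
RISKY_TOKENS = {"strcpy", "malloc", "free", "memcpy", "sprintf", "gets"}

def prune_code_graph(graph):
    """Keep nodes whose 'code' attribute mentions a risky token, plus their
    direct neighbors, so some structural context around them is preserved."""
    seeds = {n for n, d in graph.nodes(data=True)
             if any(tok in d.get("code", "") for tok in RISKY_TOKENS)}
    keep = set(seeds)
    for n in seeds:
        keep.update(graph.neighbors(n))
    return graph.subgraph(keep).copy()

if __name__ == "__main__":
    g = nx.Graph()
    g.add_node(0, code="char buf[8];")
    g.add_node(1, code="strcpy(buf, input);")
    g.add_node(2, code='printf("done");')
    g.add_edges_from([(0, 1), (1, 2), (2, 3)])
    print(sorted(prune_code_graph(g).nodes()))   # nodes 0, 1, 2 survive
```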
115 Rail Corridors between Minimal Use of Train and Unsystematic Tightening of Population: A Methodological Essay
Authors: A. Benaiche
Abstract:
In the current situation, the automobile has become the main means of locomotion. It allows traveling long distances, encouraging urban sprawl. To counteract this trend, the train is often proposed as an alternative to the car. At the same time, favoring urban development around public transport nodes such as railway stations is one of the main issues in the coordination between urban planning and transportation, and the keystone of implementing sustainable urban development. In this context, this paper focuses on the study of the spatial structuring dynamics around the railway. Specifically, it studies the demographic dynamics in the rail corridors of Nantes, Angers, and Le Mans (Western France), based on the radiation of railway stations. Consequently, the methodology concentrates on the demographic weight and gains of these corridors, the index of urban intensity, and mobility behaviors (workers' travel, students' travel, and modal practices). The perimeter considered to define the rail corridors includes the communes of the urban area which have a railway station and the communes whose access time to the railway station is less than fifteen minutes by car (a time specified by the Regional Transport Scheme of Travelers). The main tools used are statistical data from the population census, the base of detailed tables, and databases on mobility flows. The study reveals that the population is not tightening along the rail corridors and that train use is minimal despite the presence of a nearby railway station. These results lead to proposed guidelines to make the train a real vector of mobility across the rail corridors.
Keywords: coordination between urban planning and transportation, rail corridors, railway stations, travels
114 Numerical Simulation of Phase Transfer during Cryosurgery for an Irregular Tumor Using Hybrid Approach
Authors: Rama Bhargava, Surabhi Nishad
Abstract:
The infusion of nanofluids has dramatically enhanced the heat-carrying capacity of fluids, which is applicable to many engineering and medical processes where temperatures below freezing are required. Cryosurgery is an efficient therapy for the treatment of cancer, but excessive cooling may sometimes harm nearby healthy cells. Efforts are therefore made to develop a model which can generate the required low temperature. In the present study, a mathematical model is developed based on the bioheat transfer equation to simulate the heat transfer from the probe on a tumor (with an irregular domain) using a hybrid technique consisting of the element-free Galerkin method with the α-family of approximation. The probe is loaded with nanoparticles. The effects of different nanoparticles, namely Al₂O₃, Fe₃O₄, and Au, on the heat-producing rate are obtained. It is observed that the temperature can be brought to the range of -60°C to -30°C at a faster freezing rate on the infusion of different nanoparticles. Besides increasing the freezing rate, the volume of the nanoparticles can also control the size and growth of the ice crystals formed during the freezing process. A study is also made to find the time required to achieve the desired temperature. The problem is further extended to multiple tumors of different shapes and sizes. The irregular shape of the frozen domain and the direction of ice growth are very sensitive issues, posing a challenge for simulation. The meshfree method has been one of the accurate methods for such problems, as the domain is naturally irregular. The discretization is done using nodes only. MLS approximation is taken in order to generate the shape functions. Sufficiently accurate results are obtained.
Keywords: cryosurgery, EFGM, hybrid, nanoparticles
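The bioheat transfer model referred to above is typically the Pennes equation; in its standard form (a generic statement, the paper's exact nanoparticle-modified coefficients and phase-change terms are not reproduced here):

```latex
\rho c \,\frac{\partial T}{\partial t}
  = \nabla \cdot \left( k \,\nabla T \right)
  + \omega_b \,\rho_b c_b \,(T_a - T)
  + Q_m
```

where ρ, c, and k are the tissue density, specific heat, and thermal conductivity, ω_b the blood perfusion rate, ρ_b and c_b the blood density and specific heat, T_a the arterial temperature, and Q_m the metabolic heat generation; the cryoprobe enters the problem through the boundary condition imposed on the probe surface.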
113 Analysis of Network Connectivity for Ship-To-Ship Maritime Communication Using IEEE 802.11 on Maritime Environment of Tanjung Perak, Indonesia
Authors: Ahmad Fauzi Makarim, Okkie Puspitorini, Hani'ah Mahmudah, Nur Adi Siswandari, Ari Wijayanti
Abstract:
As a maritime country, Indonesia needs a maritime connectivity solution that can support the maritime communication system, including communication from harbor to ship and from ship to ship. Many application services for maritime communication, from safety to voyage services that support voyage activities, need a connection with high bandwidth. To support the government's efforts in handling this kind of problem, research was conducted on maritime communication by applying a newly deployed technology in Indonesia, namely IEEE 802.11. In this research, three outdoor WiFi devices operating at a frequency of 5.8 GHz are used. The maritime area from Tanjung Perak harbor in Surabaya to Karang Jamuang Island is used as the research location, with the distribution of ship nodes permitted by Navigation District Class 1. That maritime area is formed by state 1 and state 2 areas, which are narrow areas with an average wave height of 0.7 meters based on data from BMKG Surabaya. The wave height is then used as one of the parameters in analyzing the characteristics of signal propagation at the sea surface, so that the coverage area of the transmitter system can be determined. For the three outdoor WiFi device samples used in this research, the coverage of device A was determined to be about 2256 meters, device B 4000 meters, and device C 1174 meters. Ship-to-ship network connectivity is then analyzed using the AODV routing algorithm, based on the smallest transmit power value among all nodes within the transmitter coverage.
Keywords: maritime of Indonesia, maritime communications, outdoor wifi, coverage, AODV
112 Integration of GIS with Remote Sensing and GPS for Disaster Mitigation
Authors: Sikander Nawaz Khan
Abstract:
Natural disasters like floods, earthquakes, cyclones, volcanic eruptions, and others cause immense losses to property and lives every year. The current status and actual loss information of natural hazards can be determined, and predictions for the next probable disasters can be made, using different remote sensing and mapping technologies. The Global Positioning System (GPS) calculates the exact position of damage. It can also communicate with wireless sensor nodes embedded in potentially dangerous places. GPS provides precise and accurate locations and other related information, such as speed, track, direction, and distance of the target object, to emergency responders. Remote sensing makes it possible to map damage without physical contact with the target area. Now, with the addition of more remote sensing satellites and other advancements, early warning systems are used very efficiently. Remote sensing is being used at both local and global scales. High Resolution Satellite Imagery (HRSI), airborne remote sensing, and space-borne remote sensing are playing a vital role in disaster management. Early on, Geographic Information Systems (GIS) were used to collect, arrange, and map spatial information, but they now have the capability to analyze spatial data. This analytical ability of GIS is the main cause of its adoption by different emergency service providers like police and ambulance services. The full potential of these so-called 3S technologies cannot be realized when they are used alone. Integration of GPS and other remote sensing techniques with GIS has opened new horizons in the modeling of earth science activities. Many remote sensing cases, including the Indian Ocean tsunami in 2004, the Mount Mangart landslides, and the Pakistan-India earthquake in 2005, are described in this paper.
Keywords: disaster mitigation, GIS, GPS, remote sensing
111 Data Compression in Ultrasonic Network Communication via Sparse Signal Processing
Authors: Beata Zima, Octavio A. Márquez Reyes, Masoud Mohammadgholiha, Jochen Moll, Luca de Marchi
Abstract:
This document presents an approach to using compressed sensing for signal encoding and information transfer within a guided wave sensor network comprised of specially designed frequency steerable acoustic transducers (FSATs). Wave propagation in a damaged plate was simulated using the commercial FEM-based software COMSOL. Guided waves were excited by means of FSATs, characterized by the special shape of their electrodes, and modeled using PIC255 piezoelectric material. The special shape of the FSAT allows wave energy to be focused in a certain direction according to the frequency components of its actuation signal, which makes a larger monitored area available. The process begins when an FSAT detects and records a reflection from damage in the structure; this signal is then encoded and prepared for transmission using a combined approach based on Compressed Sensing Matching Pursuit and Quadrature Amplitude Modulation (QAM). Once the signal is encoded in binary characters, the information is transmitted between the nodes in the network. The message reaches the last node, where it is finally decoded and processed to be used for damage detection and localization purposes. The main aim of the investigation is to determine the location of detected damage using the reconstructed signals. The study demonstrates that the special steering capabilities of FSATs not only facilitate the detection of damage but also permit transmitting the damage information to a chosen area in a specific direction of the investigated structure.
Keywords: data compression, ultrasonic communication, guided waves, FEM analysis
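The encoding step relies on a matching-pursuit style sparse approximation; a compact orthogonal matching pursuit (OMP) sketch over a generic random dictionary is shown below (illustrative only; the actual dictionaries, sparsity level, and QAM mapping used in the study are not reproduced):

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal matching pursuit: greedily pick dictionary atoms of D that
    best correlate with the residual, then re-fit coefficients by least squares."""
    residual, support = y.copy(), []
    for _ in range(n_nonzero):
        corr = np.abs(D.T @ residual)
        corr[support] = 0                       # do not pick the same atom twice
        support.append(int(np.argmax(corr)))
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 256))
    D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
    x_true = np.zeros(256)
    x_true[[10, 100, 200]] = [1.5, -2.0, 0.7]
    y = D @ x_true
    x_hat = omp(D, y, n_nonzero=3)
    print(np.flatnonzero(x_hat))                # expected support: 10, 100, 200
```

In the communication chain described above, only the recovered support indices and coefficients need to be mapped to symbols (e.g., via QAM) and transmitted, which is where the compression gain comes from.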
110 On the Existence of Homotopic Mapping Between Knowledge Graphs and Graph Embeddings
Authors: Jude K. Safo
Abstract:
Knowledge Graphs (KGs) and their relation to Graph Embeddings (GEs) represent a unique data structure in the landscape of machine learning (relative to image, text, and acoustic data). Unlike the latter, GEs are the only data structure sufficient for representing the hierarchically dense, semantic information needed for use cases like supply chain data and protein folding, where the search space exceeds the limits of traditional search methods (e.g., PageRank, Dijkstra, etc.). While GEs are effective for compressing low-rank tensor data, at scale they begin to introduce a new problem of 'data retrieval', which we observe in Large Language Models. Notable attempts such as TransE, TransR, and other prominent industry standards have shown peak performance just north of 57% on the WN18 and FB15K benchmarks, insufficient for practical industry applications. They are also limited in scope to next-node/link prediction. Traditional linear methods like Tucker, CP, PARAFAC, and CANDECOMP quickly hit memory limits on tensors exceeding 6.4 million nodes. This paper outlines a topological framework for a linear mapping between concepts in KG space and GE space that preserves cardinality. Most importantly, we introduce a traceable framework for composing dense linguistic structures. We demonstrate the performance this model achieves on the WN18 benchmark. This model does not rely on Large Language Models (LLMs), though the applications are certainly relevant there as well.
Keywords: representation theory, large language models, graph embeddings, applied algebraic topology, applied knot theory, combinatorics
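For context, the translational scoring function behind TransE, referenced above, is the standard one (quoted here as background, not as part of the proposed framework):

```latex
f(h, r, t) = -\left\lVert \mathbf{h} + \mathbf{r} - \mathbf{t} \right\rVert_{p}, \qquad p \in \{1, 2\}
```

so a triple (h, r, t) scores highly when the tail embedding lies close to the head embedding translated by the relation vector, which is what limits such models to next-node/link prediction over the embedded graph.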
109 Development of an Implicit Physical Influence Upwind Scheme for Cell-Centered Finite Volume Method
Authors: Shidvash Vakilipour, Masoud Mohammadi, Rouzbeh Riazi, Scott Ormiston, Kimia Amiri, Sahar Barati
Abstract:
An essential component of a finite volume method (FVM) is the advection scheme that estimates values on the cell faces based on the calculated values at the nodes or cell centers. The most widely used advection schemes are upwind schemes. These schemes have been developed in FVM on different kinds of structured and unstructured grids. In this research, the physical influence scheme (PIS) is developed for a cell-centered FVM that uses an implicit coupled solver. Results are compared with the exponential differencing scheme (EDS) and the skew upwind differencing scheme (SUDS). The accuracy of these schemes is evaluated for a lid-driven cavity flow at Re = 1000, 3200, and 5000 and a backward-facing step flow at Re = 800. Simulations show considerable differences between the results of the EDS scheme and the benchmarks, especially for the lid-driven cavity flow at high Reynolds numbers. These differences occur due to false diffusion. Comparing the SUDS and PIS schemes shows relatively close results for the backward-facing step flow and different results for the lid-driven cavity flow. The poor results of SUDS in the lid-driven cavity flow can be related to its lack of sensitivity to the pressure difference between the cell face and upwind points, which is critical for the prediction of such vortex-dominant flows.
Keywords: cell-centered finite volume method, coupled solver, exponential differencing scheme (EDS), physical influence scheme (PIS), pressure weighted interpolation method (PWIM), skew upwind differencing scheme (SUDS)
108 Data-Driven Simulation Tools for DER and Battery-Rich Power Grids
Authors: Ali Moradiamani, Samaneh Sadat Sajjadi, Mahdi Jalili
Abstract:
Power system analysis has long been a major research topic in the generation and distribution sectors, in both industry and academia. Several load flow and fault analysis scenarios are normally performed to study the performance of different parts of the grid in the context of, for example, voltage and frequency control. Software tools, such as PSCAD, PSSE, and PowerFactory DIgSILENT, have been developed to perform these analyses accurately. The distribution grid used to be the passive part of the grid and was known as the grid of consumers. However, a significant paradigm shift has happened with the emergence of Distributed Energy Resources (DERs) at the distribution level. This means that the concept of power system analysis needs to be extended to the distribution grid, especially considering self-sufficient technologies such as microgrids. Compared to the generation and transmission levels, the distribution level includes significantly more generation/consumption nodes, thanks to rooftop PV solar generation and battery energy storage systems. In addition, different consumption profiles are expected from household residents, resulting in a diverse set of scenarios. The emergence of electric vehicles will make the environment even more complicated, considering their charging (and possibly discharging) requirements. These complexities, as well as the large size of distribution grids, create challenges for the available power system analysis software. In this paper, we study the requirements of simulation tools for the distribution grid and how data-driven algorithms are required to increase the accuracy of the simulation results.
Keywords: smart grids, distributed energy resources, electric vehicles, battery storage systems, simulation tools
107 A Network Optimization Study of Logistics for Enhancing Emergency Preparedness in Asia-Pacific
Authors: Giuseppe Timperio, Robert De Souza
Abstract:
The combination of factors such as temperamental climate change, rampant urbanization of risk-exposed areas, and political and social instabilities is creating an alarming basis for further growth in the number and magnitude of humanitarian crises worldwide. Given the unique features of the humanitarian supply chain, such as the unpredictability of demand in space, time, and geography, the spike in the number of requests for relief items in the first days after a calamity, the uncertain state of logistics infrastructures, and large volumes of unsolicited low-priority items, a proactive approach towards the design of disaster response operations is needed to achieve high agility in the mobilization of emergency supplies in the immediate aftermath of the event. This paper is an attempt in that direction, and it provides decision makers with crucial strategic insights for a more effective network design for disaster response. Decision sciences and ICT are integrated to analyse the robustness and resilience of a prepositioned network of emergency strategic stockpiles for a real-life case about Indonesia, one of the most vulnerable countries in Asia-Pacific, with the model being built upon a rich set of quantitative data. To this aim, a network optimization approach was implemented, with several what-if scenarios being accurately developed and tested. The findings of this study can support decision makers facing challenges related to disaster relief chain resilience, particularly the optimal configuration of supply chain facilities and the optimal flows across the nodes, while considering the network structure from an end-to-end in-country distribution perspective.
Keywords: disaster preparedness, humanitarian logistics, network optimization, resilience
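As a stylized illustration of the kind of network-flow formulation such a prepositioning study rests on (a toy instance with made-up nodes, capacities, and costs, not the Indonesian case data):

```python
import networkx as nx

# Toy relief network: two stockpiles supply two affected districts.
G = nx.DiGraph()
G.add_node("stockpile_A", demand=-60)    # negative demand = supply (relief units)
G.add_node("stockpile_B", demand=-40)
G.add_node("district_1", demand=70)
G.add_node("district_2", demand=30)
G.add_edge("stockpile_A", "district_1", capacity=50, weight=4)   # weight = cost per unit
G.add_edge("stockpile_A", "district_2", capacity=50, weight=6)
G.add_edge("stockpile_B", "district_1", capacity=40, weight=3)
G.add_edge("stockpile_B", "district_2", capacity=40, weight=2)

flow = nx.min_cost_flow(G)               # cheapest feasible shipment plan
cost = nx.cost_of_flow(G, flow)
print(flow, cost)
```

What-if scenarios of the kind mentioned above can then be explored by perturbing capacities, costs, or demands (e.g., closing a port edge) and re-solving the same model.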
106 To Design an Architectural Model for On-Shore Oil Monitoring Using Wireless Sensor Network System
Authors: Saurabh Shukla, G. N. Pandey
Abstract:
In recent times, oil exploration and monitoring in on-shore areas have gained much importance, considering the fact that oil accounts for 62 percent of India's total imports. Thus, an architectural model such as a wireless sensor network to monitor on-shore deep oil wells is being developed to get a better estimate of the oil prospects. The problem we face nowadays is that very few restricted oil areas are left today. Countries like India do not have very large areas and resources for oil, and this is the case for most countries, which is why on-shore oil exploration has become a major problem; the increase in oil prices has further aggravated it. The relative simplicity, small size, and affordable cost of wireless sensor nodes permit heavy deployment in on-shore places for monitoring oil wells, and the deployment of a wireless sensor network over large areas will surely be cost-effective. The objective of this system is to send real-time oil monitoring information to the regulatory and welfare authorities so that suitable action can be taken. The system architecture is composed of a sensor network, a processing/transmission unit, and a server. This wireless sensor network system can remotely monitor real-time data on oil exploration and monitoring conditions in the identified areas. Wireless sensor network systems are wireless, have scarce power, are real-time, and utilize sensors and actuators as interfaces; they have dynamically changing sets of resources, their aggregate behaviour is important, and location is critical. In this system, communication takes place between the server and remotely placed sensors. The server provides the real-time oil exploration and monitoring conditions to the welfare authorities.
Keywords: sensor, wireless sensor network, oil, on-shore level
105 Modeling Loads Applied to Main and Crank Bearings in the Compression-Ignition Two-Stroke Engine
Authors: Marcin Szlachetka, Mateusz Paszko, Grzegorz Baranski
Abstract:
This paper discusses AVL EXCITE Designer simulation research into the loads applied to the main and crank bearings in a compression-ignition two-stroke engine. A model of the engine lubrication system was created, covering the part of this system related to particular nodes of the bearing system, i.e., the connection of the main bearings in the engine block with the crankshaft and the connection of the crank pins with the connecting rod. The analysis focused on the load given as a distribution of hydrodynamic oil film pressure corresponding to different values of radial internal clearance. The impact of the gas force on the minimal oil film thickness in the main and crank bearings versus crankshaft rotational speed was also studied. Our model calculates the oil film parameters, the oil film pressure distribution, the oil temperature change, and the dimensions of the bearings, as well as the oil temperature distribution on the surfaces of the bearing seats. Accordingly, it was possible to select, for example, a correct clearance for each of the node bearings. The research was performed for several values of engine crankshaft speed, ranging from 800 RPM to 4000 RPM. Bearing oil pressure was varied with engine speed between 1 bar and 5 bar, at an oil temperature of 90°C. The main bearing clearances initially assumed for the calculations and research were 0.015 mm, 0.025 mm, 0.035 mm, 0.05 mm, and 0.1 mm. The oil used for the research corresponded to the SAE 5W-40 classification. The paper presents selected research results referring to certain specific operating points and bearing radial internal clearances. Acknowledgement: This work has been realized in cooperation with The Construction Office of WSK 'PZL-KALISZ' S.A. and is part of Grant Agreement No. POIR.01.02.00-00-0002/15 financed by the Polish National Centre for Research and Development.
Keywords: crank bearings, diesel engine, oil film, two-stroke engine
104 Mobile Crowdsensing Scheme by Predicting Vehicle Mobility Using Deep Learning Algorithm
Authors: Monojit Manna, Arpan Adhikary
Abstract:
Mobile crowdsensing is an emerging paradigm across the globe in which users are selected to perform sensing tasks. In today's urban cities, mobile vehicles are adopted to perform data sensing and data collection because of their universality and mobility. In this work, we focus on the optimal selection of mobile nodes in order to collect the maximum amount of data from urban areas and fulfill the data required for a future period within a couple of minutes. We map the vehicle requirements into a maximum data collection optimization problem under a budget. The application implementation is set up to generalize a realistic online platform in which vehicles move in real time in a continuous manner. The data center has the authority to select a set of vehicles immediately. A deep learning-based scheme with the help of mobile vehicles (DLMV) is proposed to collect sensing data from the urban environment. To account for future periods, this work proposes a deep learning-based offline algorithm to predict mobility. We then propose a greedy online algorithm that selects a subset of vehicles for this NP-complete problem under a limited budget. Extensive experimental evaluations are conducted on a real mobility dataset from Rome. The experimental results not only demonstrate the efficiency of our proposed solution but also prove the validity of DLMV and its improvement in the quantity of collected sensing data compared with other algorithms.
Keywords: mobile crowdsensing, deep learning, vehicle recruitment, sensing coverage, data collection
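The budgeted greedy selection described above can be sketched as the classic cost-benefit greedy for budgeted maximum coverage (illustrative only; the real scheme scores vehicles with coverage predicted by the deep mobility model, which is not reproduced here, and the fleet data below is invented):

```python
def greedy_vehicle_selection(vehicles, budget):
    """Pick vehicles maximizing newly covered regions per unit recruitment cost
    until the budget is exhausted. `vehicles` maps id -> (cost, set_of_regions)."""
    covered, chosen, remaining = set(), [], budget
    candidates = dict(vehicles)
    while candidates:
        best_id, best_ratio = None, 0.0
        for vid, (cost, regions) in candidates.items():
            gain = len(regions - covered)          # marginal (predicted) coverage
            if cost <= remaining and gain / cost > best_ratio:
                best_id, best_ratio = vid, gain / cost
        if best_id is None:                        # nothing affordable or useful left
            break
        cost, regions = candidates.pop(best_id)
        chosen.append(best_id)
        covered |= regions
        remaining -= cost
    return chosen, covered

if __name__ == "__main__":
    fleet = {
        "v1": (3, {"r1", "r2", "r3"}),
        "v2": (2, {"r3", "r4"}),
        "v3": (4, {"r1", "r4", "r5", "r6"}),
    }
    print(greedy_vehicle_selection(fleet, budget=6))
```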
103 Looking beyond Lynch's Image of a City
Authors: Sandhya Rao
Abstract:
Kevin Lynch's theory of imageability lets one explore a city in terms of five elements: nodes, paths, edges, landmarks, and districts. What happens when we try to record the same data in an Indian context? What happens when we apply the same theory of imageability to the complex, shifting urban pattern of Indian cities, and how can we as urban designers demonstrate our role in the image-building ordeal of these cities? The organizational patterns formed through mental images of an Indian city are often diverse and intangible. They are also multi-layered and temporary in terms of the spirit of the place. The pattern of images formed is loaded with associative meaning and intrinsically linked with the history and socio-cultural dominance of the place. The embedded memory of a place in one's mind often plays an even more important role while formulating these images. Thus, while deriving an image of a city, one is often confused or finds the result chaotic. Due to this complexity, the images formed are also difficult to represent using a single medium. Under such a scenario, it is difficult to derive an output of the image constructed, as well as to make design interventions that enhance the legibility of a place. However, there can be a combination of tools and methods that allows one to record the key elements of a place through time, space, and one's interface with the place. There also has to be a clear understanding of the participant groups of a place and their time and period of engagement with the place. How we can translate the result obtained into a design intervention is the main aim of the research. Could a multi-faceted cognitive mapping be an answer to this, or could it be a very transient mapping method which can change over time, place, and person? How does the context influence the process of image building in one's mind? These are the key questions that this research aims to answer.
Keywords: imageability, organizational patterns, legibility, cognitive mapping
102 Mnemotopic Perspectives: Communication Design as Stabilizer for the Memory of Places
Authors: C. Galasso
Abstract:
The ancestral relationship between humans and the geographical environment has long been at the center of an interdisciplinary dialogue, which sees one of its main research nodes in the relationship between memory and places. Given its deep complexity, this symbiotic connection continues to look for a proper definition that appears increasingly negotiated between different disciplines. Numerous fields of knowledge are involved, from anthropology to the semiotics of space, from photography to architecture, up to subjects traditionally far from these reasonings. This is the case of Design of Communication, a young discipline, now confident in itself and its objectives, aimed at finding and investigating original forms of visualization and representation between sedimented knowledge and new technologies. In particular, Design of Communication for the Territory offers an alternative perspective to the debate, encouraging the reactivation and reconstruction of the memory of places. Recognizing mnemotopes as a cultural object of vertical interpretation of the memory-place relationship, design can become a real mediator of the territorial fixation of memories, making them increasingly accessible and perceptible and contributing to building a topography of memory. According to a mnemotopic vision, Communication Design can support the passage from a memory in which the observer participates only as an individual to a collective form of memory. A mnemotopic form of Communication Design can, through geolocation and map-based content systems, make chronology a topography rooted in the territory and practicable; it can be useful for understanding how the perception of the memory of places changes over time and for considering how to insert them into the contemporary world. Mnemotopes can be materialized in different formats of translation, editing, and narration and then involved in complex systems of communication. The memory of places, therefore, if stabilized by the tools offered by Communication Design, can make ruins and territorial stratifications visible, illuminating them with new communicative interests that can be shared and participated in.
Keywords: memory of places, design of communication, territory, mnemotope, topography of memory